Test Report: KVM_Linux_crio 16865

e527c943862622d235c52d3f78f307a89288bf9f:2023-08-17:30622

Test fail (26/300)

Order  Failed test  Duration (s)
32 TestAddons/parallel/Ingress 157.69
43 TestAddons/StoppedEnableDisable 155.37
82 TestFunctional/serial/LogsFileCmd 1.49
159 TestIngressAddonLegacy/serial/ValidateIngressAddons 168.08
207 TestMultiNode/serial/PingHostFrom2Pods 3.23
213 TestMultiNode/serial/RestartKeepsNodes 684.9
215 TestMultiNode/serial/StopMultiNode 142.84
222 TestPreload 192.52
228 TestRunningBinaryUpgrade 145.57
254 TestStoppedBinaryUpgrade/Upgrade 335.48
328 TestStartStop/group/old-k8s-version/serial/Stop 140.12
330 TestStartStop/group/no-preload/serial/Stop 139.82
333 TestStartStop/group/embed-certs/serial/Stop 139.97
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.21
337 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.62
346 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.47
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.45
348 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.42
349 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 332.88
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 352.1
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 368.78
352 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 111.94
TestAddons/parallel/Ingress (157.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-696435 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-696435 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:208: (dbg) Done: kubectl --context addons-696435 replace --force -f testdata/nginx-ingress-v1.yaml: (1.23083879s)
addons_test.go:221: (dbg) Run:  kubectl --context addons-696435 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [943e028b-6339-40cb-ba80-2824903afc67] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [943e028b-6339-40cb-ba80-2824903afc67] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.033099262s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-696435 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.109775969s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-696435 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-696435 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.045805328s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.18
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-696435 addons disable ingress-dns --alsologtostderr -v=1: (1.657612959s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-696435 addons disable ingress --alsologtostderr -v=1: (7.96032213s)
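The failing step above is the in-VM curl against the ingress controller: ssh reports status 28, which is curl's exit code for an operation timeout, so ingress-nginx most likely never answered on 127.0.0.1:80 inside the VM within the 2m9s window. The following is an illustrative sketch only, not the test's actual code from addons_test.go: it shows an equivalent check run from the host against the node IP reported for this run (192.168.39.18), selecting the ingress rule via the Host header.

// Illustrative sketch, not the test's implementation: fetch the nginx
// ingress by sending a request to the node IP while presenting the
// virtual host "nginx.example.com", which ingress-nginx matches against
// its rules. A timeout here corresponds to the curl exit code 28 seen above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}

	// Target the node address; routing is decided by the Host header, not the URL.
	req, err := http.NewRequest("GET", "http://192.168.39.18/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // virtual host expected by the ingress rule

	resp, err := client.Do(req)
	if err != nil {
		// A timeout error here is the analogue of "curl: (28) Operation timed out".
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
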
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-696435 -n addons-696435
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-696435 logs -n 25: (1.266583348s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-936342           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-936342           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-936342           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| delete  | -p download-only-936342           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| delete  | -p download-only-936342           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| start   | --download-only -p                | binary-mirror-984914 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | binary-mirror-984914              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40607            |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-984914           | binary-mirror-984914 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| start   | -p addons-696435                  | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:13 UTC |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	|         | --addons=helm-tiller              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | addons-696435                     |                      |         |         |                     |                     |
	| addons  | addons-696435 addons              | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | disable metrics-server            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | addons-696435                     |                      |         |         |                     |                     |
	| addons  | addons-696435 addons disable      | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | helm-tiller --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | -p addons-696435                  |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                      |         |         |                     |                     |
	| ip      | addons-696435 ip                  | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	| addons  | addons-696435 addons disable      | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | registry --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                              |                      |         |         |                     |                     |
	| ssh     | addons-696435 ssh curl -s         | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:       |                      |         |         |                     |                     |
	|         | nginx.example.com'                |                      |         |         |                     |                     |
	| addons  | addons-696435 addons              | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:14 UTC | 17 Aug 23 21:14 UTC |
	|         | disable csi-hostpath-driver       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                      |         |         |                     |                     |
	| addons  | addons-696435 addons              | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:14 UTC | 17 Aug 23 21:14 UTC |
	|         | disable volumesnapshots           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                      |         |         |                     |                     |
	| ip      | addons-696435 ip                  | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	| addons  | addons-696435 addons disable      | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | ingress-dns --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                              |                      |         |         |                     |                     |
	| addons  | addons-696435 addons disable      | addons-696435        | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | ingress --alsologtostderr -v=1    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:44.086351  211031 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:44.086477  211031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:44.086487  211031 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:44.086491  211031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:44.086738  211031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:10:44.087477  211031 out.go:303] Setting JSON to false
	I0817 21:10:44.088356  211031 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21169,"bootTime":1692285475,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:44.088423  211031 start.go:138] virtualization: kvm guest
	I0817 21:10:44.091116  211031 out.go:177] * [addons-696435] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:44.092692  211031 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:10:44.092742  211031 notify.go:220] Checking for updates...
	I0817 21:10:44.094237  211031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:44.096424  211031 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:10:44.098233  211031 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:10:44.099699  211031 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:10:44.101207  211031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:10:44.102768  211031 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:10:44.135937  211031 out.go:177] * Using the kvm2 driver based on user configuration
	I0817 21:10:44.137404  211031 start.go:298] selected driver: kvm2
	I0817 21:10:44.137419  211031 start.go:902] validating driver "kvm2" against <nil>
	I0817 21:10:44.137445  211031 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:10:44.138425  211031 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:44.138522  211031 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 21:10:44.154194  211031 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 21:10:44.154247  211031 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:10:44.154460  211031 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:10:44.154502  211031 cni.go:84] Creating CNI manager for ""
	I0817 21:10:44.154515  211031 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:10:44.154524  211031 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0817 21:10:44.154533  211031 start_flags.go:319] config:
	{Name:addons-696435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-696435 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:44.154671  211031 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:44.156827  211031 out.go:177] * Starting control plane node addons-696435 in cluster addons-696435
	I0817 21:10:44.158274  211031 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:10:44.158329  211031 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:44.158379  211031 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:44.158494  211031 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:10:44.158506  211031 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:10:44.158862  211031 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/config.json ...
	I0817 21:10:44.158891  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/config.json: {Name:mk1975192bdeb45256d2dd91a26ef312071a2d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:10:44.159063  211031 start.go:365] acquiring machines lock for addons-696435: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:10:44.159127  211031 start.go:369] acquired machines lock for "addons-696435" in 45.169µs
	I0817 21:10:44.159149  211031 start.go:93] Provisioning new machine with config: &{Name:addons-696435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:a
ddons-696435 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:10:44.159240  211031 start.go:125] createHost starting for "" (driver="kvm2")
	I0817 21:10:44.161353  211031 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0817 21:10:44.161498  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:10:44.161548  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:10:44.177225  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I0817 21:10:44.177875  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:10:44.178675  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:10:44.178713  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:10:44.179089  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:10:44.179285  211031 main.go:141] libmachine: (addons-696435) Calling .GetMachineName
	I0817 21:10:44.179429  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:10:44.179635  211031 start.go:159] libmachine.API.Create for "addons-696435" (driver="kvm2")
	I0817 21:10:44.179670  211031 client.go:168] LocalClient.Create starting
	I0817 21:10:44.179720  211031 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem
	I0817 21:10:44.403911  211031 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem
	I0817 21:10:44.486637  211031 main.go:141] libmachine: Running pre-create checks...
	I0817 21:10:44.486667  211031 main.go:141] libmachine: (addons-696435) Calling .PreCreateCheck
	I0817 21:10:44.487246  211031 main.go:141] libmachine: (addons-696435) Calling .GetConfigRaw
	I0817 21:10:44.487805  211031 main.go:141] libmachine: Creating machine...
	I0817 21:10:44.487824  211031 main.go:141] libmachine: (addons-696435) Calling .Create
	I0817 21:10:44.487999  211031 main.go:141] libmachine: (addons-696435) Creating KVM machine...
	I0817 21:10:44.489587  211031 main.go:141] libmachine: (addons-696435) DBG | found existing default KVM network
	I0817 21:10:44.490554  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:44.490354  211053 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d7a0}
	I0817 21:10:44.496784  211031 main.go:141] libmachine: (addons-696435) DBG | trying to create private KVM network mk-addons-696435 192.168.39.0/24...
	I0817 21:10:44.573441  211031 main.go:141] libmachine: (addons-696435) DBG | private KVM network mk-addons-696435 192.168.39.0/24 created
	I0817 21:10:44.573479  211031 main.go:141] libmachine: (addons-696435) Setting up store path in /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435 ...
	I0817 21:10:44.573513  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:44.573446  211053 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:10:44.573538  211031 main.go:141] libmachine: (addons-696435) Building disk image from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0817 21:10:44.573702  211031 main.go:141] libmachine: (addons-696435) Downloading /home/jenkins/minikube-integration/16865-203458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0817 21:10:44.806808  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:44.806635  211053 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa...
	I0817 21:10:44.952395  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:44.952227  211053 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/addons-696435.rawdisk...
	I0817 21:10:44.952455  211031 main.go:141] libmachine: (addons-696435) DBG | Writing magic tar header
	I0817 21:10:44.952468  211031 main.go:141] libmachine: (addons-696435) DBG | Writing SSH key tar header
	I0817 21:10:44.952476  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:44.952368  211053 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435 ...
	I0817 21:10:44.952488  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435
	I0817 21:10:44.952497  211031 main.go:141] libmachine: (addons-696435) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435 (perms=drwx------)
	I0817 21:10:44.952508  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines
	I0817 21:10:44.952520  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:10:44.952527  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458
	I0817 21:10:44.952539  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0817 21:10:44.952551  211031 main.go:141] libmachine: (addons-696435) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines (perms=drwxr-xr-x)
	I0817 21:10:44.952557  211031 main.go:141] libmachine: (addons-696435) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube (perms=drwxr-xr-x)
	I0817 21:10:44.952580  211031 main.go:141] libmachine: (addons-696435) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458 (perms=drwxrwxr-x)
	I0817 21:10:44.952600  211031 main.go:141] libmachine: (addons-696435) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0817 21:10:44.952609  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home/jenkins
	I0817 21:10:44.952624  211031 main.go:141] libmachine: (addons-696435) DBG | Checking permissions on dir: /home
	I0817 21:10:44.952633  211031 main.go:141] libmachine: (addons-696435) DBG | Skipping /home - not owner
	I0817 21:10:44.952656  211031 main.go:141] libmachine: (addons-696435) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0817 21:10:44.952665  211031 main.go:141] libmachine: (addons-696435) Creating domain...
	I0817 21:10:44.953952  211031 main.go:141] libmachine: (addons-696435) define libvirt domain using xml: 
	I0817 21:10:44.953990  211031 main.go:141] libmachine: (addons-696435) <domain type='kvm'>
	I0817 21:10:44.954028  211031 main.go:141] libmachine: (addons-696435)   <name>addons-696435</name>
	I0817 21:10:44.954047  211031 main.go:141] libmachine: (addons-696435)   <memory unit='MiB'>4000</memory>
	I0817 21:10:44.954069  211031 main.go:141] libmachine: (addons-696435)   <vcpu>2</vcpu>
	I0817 21:10:44.954075  211031 main.go:141] libmachine: (addons-696435)   <features>
	I0817 21:10:44.954107  211031 main.go:141] libmachine: (addons-696435)     <acpi/>
	I0817 21:10:44.954128  211031 main.go:141] libmachine: (addons-696435)     <apic/>
	I0817 21:10:44.954136  211031 main.go:141] libmachine: (addons-696435)     <pae/>
	I0817 21:10:44.954145  211031 main.go:141] libmachine: (addons-696435)     
	I0817 21:10:44.954152  211031 main.go:141] libmachine: (addons-696435)   </features>
	I0817 21:10:44.954158  211031 main.go:141] libmachine: (addons-696435)   <cpu mode='host-passthrough'>
	I0817 21:10:44.954164  211031 main.go:141] libmachine: (addons-696435)   
	I0817 21:10:44.954168  211031 main.go:141] libmachine: (addons-696435)   </cpu>
	I0817 21:10:44.954174  211031 main.go:141] libmachine: (addons-696435)   <os>
	I0817 21:10:44.954180  211031 main.go:141] libmachine: (addons-696435)     <type>hvm</type>
	I0817 21:10:44.954188  211031 main.go:141] libmachine: (addons-696435)     <boot dev='cdrom'/>
	I0817 21:10:44.954194  211031 main.go:141] libmachine: (addons-696435)     <boot dev='hd'/>
	I0817 21:10:44.954200  211031 main.go:141] libmachine: (addons-696435)     <bootmenu enable='no'/>
	I0817 21:10:44.954209  211031 main.go:141] libmachine: (addons-696435)   </os>
	I0817 21:10:44.954216  211031 main.go:141] libmachine: (addons-696435)   <devices>
	I0817 21:10:44.954221  211031 main.go:141] libmachine: (addons-696435)     <disk type='file' device='cdrom'>
	I0817 21:10:44.954231  211031 main.go:141] libmachine: (addons-696435)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/boot2docker.iso'/>
	I0817 21:10:44.954240  211031 main.go:141] libmachine: (addons-696435)       <target dev='hdc' bus='scsi'/>
	I0817 21:10:44.954246  211031 main.go:141] libmachine: (addons-696435)       <readonly/>
	I0817 21:10:44.954254  211031 main.go:141] libmachine: (addons-696435)     </disk>
	I0817 21:10:44.954261  211031 main.go:141] libmachine: (addons-696435)     <disk type='file' device='disk'>
	I0817 21:10:44.954268  211031 main.go:141] libmachine: (addons-696435)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0817 21:10:44.954277  211031 main.go:141] libmachine: (addons-696435)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/addons-696435.rawdisk'/>
	I0817 21:10:44.954288  211031 main.go:141] libmachine: (addons-696435)       <target dev='hda' bus='virtio'/>
	I0817 21:10:44.954298  211031 main.go:141] libmachine: (addons-696435)     </disk>
	I0817 21:10:44.954303  211031 main.go:141] libmachine: (addons-696435)     <interface type='network'>
	I0817 21:10:44.954312  211031 main.go:141] libmachine: (addons-696435)       <source network='mk-addons-696435'/>
	I0817 21:10:44.954317  211031 main.go:141] libmachine: (addons-696435)       <model type='virtio'/>
	I0817 21:10:44.954323  211031 main.go:141] libmachine: (addons-696435)     </interface>
	I0817 21:10:44.954329  211031 main.go:141] libmachine: (addons-696435)     <interface type='network'>
	I0817 21:10:44.954335  211031 main.go:141] libmachine: (addons-696435)       <source network='default'/>
	I0817 21:10:44.954342  211031 main.go:141] libmachine: (addons-696435)       <model type='virtio'/>
	I0817 21:10:44.954375  211031 main.go:141] libmachine: (addons-696435)     </interface>
	I0817 21:10:44.954394  211031 main.go:141] libmachine: (addons-696435)     <serial type='pty'>
	I0817 21:10:44.954401  211031 main.go:141] libmachine: (addons-696435)       <target port='0'/>
	I0817 21:10:44.954407  211031 main.go:141] libmachine: (addons-696435)     </serial>
	I0817 21:10:44.954415  211031 main.go:141] libmachine: (addons-696435)     <console type='pty'>
	I0817 21:10:44.954422  211031 main.go:141] libmachine: (addons-696435)       <target type='serial' port='0'/>
	I0817 21:10:44.954430  211031 main.go:141] libmachine: (addons-696435)     </console>
	I0817 21:10:44.954436  211031 main.go:141] libmachine: (addons-696435)     <rng model='virtio'>
	I0817 21:10:44.954446  211031 main.go:141] libmachine: (addons-696435)       <backend model='random'>/dev/random</backend>
	I0817 21:10:44.954451  211031 main.go:141] libmachine: (addons-696435)     </rng>
	I0817 21:10:44.954459  211031 main.go:141] libmachine: (addons-696435)     
	I0817 21:10:44.954464  211031 main.go:141] libmachine: (addons-696435)     
	I0817 21:10:44.954472  211031 main.go:141] libmachine: (addons-696435)   </devices>
	I0817 21:10:44.954477  211031 main.go:141] libmachine: (addons-696435) </domain>
	I0817 21:10:44.954486  211031 main.go:141] libmachine: (addons-696435) 
	I0817 21:10:44.959481  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:a2:6f:29 in network default
	I0817 21:10:44.960137  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:44.960168  211031 main.go:141] libmachine: (addons-696435) Ensuring networks are active...
	I0817 21:10:44.960907  211031 main.go:141] libmachine: (addons-696435) Ensuring network default is active
	I0817 21:10:44.961242  211031 main.go:141] libmachine: (addons-696435) Ensuring network mk-addons-696435 is active
	I0817 21:10:44.961936  211031 main.go:141] libmachine: (addons-696435) Getting domain xml...
	I0817 21:10:44.962657  211031 main.go:141] libmachine: (addons-696435) Creating domain...
	I0817 21:10:46.227778  211031 main.go:141] libmachine: (addons-696435) Waiting to get IP...
	I0817 21:10:46.229359  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:46.230132  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:46.230254  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:46.230124  211053 retry.go:31] will retry after 302.72324ms: waiting for machine to come up
	I0817 21:10:46.535134  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:46.535646  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:46.535682  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:46.535595  211053 retry.go:31] will retry after 287.140639ms: waiting for machine to come up
	I0817 21:10:46.824194  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:46.824793  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:46.824823  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:46.824698  211053 retry.go:31] will retry after 401.206574ms: waiting for machine to come up
	I0817 21:10:47.227540  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:47.228048  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:47.228080  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:47.228006  211053 retry.go:31] will retry after 525.282822ms: waiting for machine to come up
	I0817 21:10:47.754730  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:47.755127  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:47.755160  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:47.755063  211053 retry.go:31] will retry after 487.332393ms: waiting for machine to come up
	I0817 21:10:48.243738  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:48.244147  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:48.244194  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:48.244055  211053 retry.go:31] will retry after 838.553013ms: waiting for machine to come up
	I0817 21:10:49.083858  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:49.084193  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:49.084224  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:49.084169  211053 retry.go:31] will retry after 846.564049ms: waiting for machine to come up
	I0817 21:10:49.932139  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:49.932588  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:49.932635  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:49.932543  211053 retry.go:31] will retry after 1.114524757s: waiting for machine to come up
	I0817 21:10:51.048908  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:51.049278  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:51.049321  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:51.049227  211053 retry.go:31] will retry after 1.777358278s: waiting for machine to come up
	I0817 21:10:52.827934  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:52.828385  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:52.828436  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:52.828328  211053 retry.go:31] will retry after 1.906516904s: waiting for machine to come up
	I0817 21:10:54.736146  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:54.736606  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:54.736639  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:54.736543  211053 retry.go:31] will retry after 1.81375431s: waiting for machine to come up
	I0817 21:10:56.552552  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:56.552995  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:56.553024  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:56.552952  211053 retry.go:31] will retry after 2.948734926s: waiting for machine to come up
	I0817 21:10:59.503900  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:10:59.504228  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:10:59.504253  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:10:59.504206  211053 retry.go:31] will retry after 4.024311081s: waiting for machine to come up
	I0817 21:11:03.533551  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:03.534014  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find current IP address of domain addons-696435 in network mk-addons-696435
	I0817 21:11:03.534045  211031 main.go:141] libmachine: (addons-696435) DBG | I0817 21:11:03.533950  211053 retry.go:31] will retry after 5.390315384s: waiting for machine to come up
	I0817 21:11:08.929452  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:08.929861  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has current primary IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:08.929917  211031 main.go:141] libmachine: (addons-696435) Found IP for machine: 192.168.39.18
	I0817 21:11:08.929944  211031 main.go:141] libmachine: (addons-696435) Reserving static IP address...
	I0817 21:11:08.930284  211031 main.go:141] libmachine: (addons-696435) DBG | unable to find host DHCP lease matching {name: "addons-696435", mac: "52:54:00:9a:f5:12", ip: "192.168.39.18"} in network mk-addons-696435
	I0817 21:11:09.008165  211031 main.go:141] libmachine: (addons-696435) DBG | Getting to WaitForSSH function...
	I0817 21:11:09.008221  211031 main.go:141] libmachine: (addons-696435) Reserved static IP address: 192.168.39.18
	I0817 21:11:09.008236  211031 main.go:141] libmachine: (addons-696435) Waiting for SSH to be available...
	I0817 21:11:09.010931  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.011396  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.011440  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.011537  211031 main.go:141] libmachine: (addons-696435) DBG | Using SSH client type: external
	I0817 21:11:09.011571  211031 main.go:141] libmachine: (addons-696435) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa (-rw-------)
	I0817 21:11:09.011628  211031 main.go:141] libmachine: (addons-696435) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 21:11:09.011663  211031 main.go:141] libmachine: (addons-696435) DBG | About to run SSH command:
	I0817 21:11:09.011678  211031 main.go:141] libmachine: (addons-696435) DBG | exit 0
	I0817 21:11:09.102253  211031 main.go:141] libmachine: (addons-696435) DBG | SSH cmd err, output: <nil>: 
	I0817 21:11:09.102534  211031 main.go:141] libmachine: (addons-696435) KVM machine creation complete!
	I0817 21:11:09.102865  211031 main.go:141] libmachine: (addons-696435) Calling .GetConfigRaw
	I0817 21:11:09.103439  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:09.103705  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:09.103854  211031 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0817 21:11:09.103870  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:09.105268  211031 main.go:141] libmachine: Detecting operating system of created instance...
	I0817 21:11:09.105283  211031 main.go:141] libmachine: Waiting for SSH to be available...
	I0817 21:11:09.105297  211031 main.go:141] libmachine: Getting to WaitForSSH function...
	I0817 21:11:09.105308  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.107696  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.108078  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.108104  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.108232  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:09.108438  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.108599  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.108761  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:09.108975  211031 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:09.109435  211031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0817 21:11:09.109449  211031 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0817 21:11:09.229512  211031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:11:09.229539  211031 main.go:141] libmachine: Detecting the provisioner...
	I0817 21:11:09.229547  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.232566  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.233075  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.233122  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.233272  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:09.233493  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.233677  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.233862  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:09.234076  211031 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:09.234513  211031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0817 21:11:09.234531  211031 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0817 21:11:09.355199  211031 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0817 21:11:09.355349  211031 main.go:141] libmachine: found compatible host: buildroot
	I0817 21:11:09.355367  211031 main.go:141] libmachine: Provisioning with buildroot...
	I0817 21:11:09.355381  211031 main.go:141] libmachine: (addons-696435) Calling .GetMachineName
	I0817 21:11:09.355675  211031 buildroot.go:166] provisioning hostname "addons-696435"
	I0817 21:11:09.355703  211031 main.go:141] libmachine: (addons-696435) Calling .GetMachineName
	I0817 21:11:09.355940  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.358450  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.358801  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.358846  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.359047  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:09.359312  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.359489  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.359673  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:09.359840  211031 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:09.360436  211031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0817 21:11:09.360460  211031 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-696435 && echo "addons-696435" | sudo tee /etc/hostname
	I0817 21:11:09.490773  211031 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-696435
	
	I0817 21:11:09.490829  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.493947  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.494356  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.494386  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.494601  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:09.494846  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.495016  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.495196  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:09.495357  211031 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:09.495744  211031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0817 21:11:09.495762  211031 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-696435' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-696435/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-696435' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:11:09.622408  211031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:11:09.622444  211031 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:11:09.622474  211031 buildroot.go:174] setting up certificates
	I0817 21:11:09.622486  211031 provision.go:83] configureAuth start
	I0817 21:11:09.622498  211031 main.go:141] libmachine: (addons-696435) Calling .GetMachineName
	I0817 21:11:09.622829  211031 main.go:141] libmachine: (addons-696435) Calling .GetIP
	I0817 21:11:09.625704  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.626244  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.626278  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.626419  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.628617  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.628988  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.629019  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.629146  211031 provision.go:138] copyHostCerts
	I0817 21:11:09.629220  211031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:11:09.629343  211031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:11:09.629416  211031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:11:09.629488  211031 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.addons-696435 san=[192.168.39.18 192.168.39.18 localhost 127.0.0.1 minikube addons-696435]
	I0817 21:11:09.766296  211031 provision.go:172] copyRemoteCerts
	I0817 21:11:09.766372  211031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:11:09.766401  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.769393  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.769709  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.769750  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.770032  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:09.770279  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.770517  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:09.770677  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:09.862642  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0817 21:11:09.888615  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:11:09.912688  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:11:09.936541  211031 provision.go:86] duration metric: configureAuth took 314.038329ms
	I0817 21:11:09.936570  211031 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:11:09.936903  211031 config.go:182] Loaded profile config "addons-696435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:11:09.937055  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:09.939915  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.940271  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:09.940304  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:09.940510  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:09.940746  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.940984  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:09.941161  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:09.941335  211031 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:09.941933  211031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0817 21:11:09.941952  211031 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:11:10.440259  211031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:11:10.440291  211031 main.go:141] libmachine: Checking connection to Docker...
	I0817 21:11:10.440313  211031 main.go:141] libmachine: (addons-696435) Calling .GetURL
	I0817 21:11:10.441802  211031 main.go:141] libmachine: (addons-696435) DBG | Using libvirt version 6000000
	I0817 21:11:10.443994  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.444437  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.444470  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.444629  211031 main.go:141] libmachine: Docker is up and running!
	I0817 21:11:10.444650  211031 main.go:141] libmachine: Reticulating splines...
	I0817 21:11:10.444658  211031 client.go:171] LocalClient.Create took 26.264978689s
	I0817 21:11:10.444684  211031 start.go:167] duration metric: libmachine.API.Create for "addons-696435" took 26.265051155s
	I0817 21:11:10.444692  211031 start.go:300] post-start starting for "addons-696435" (driver="kvm2")
	I0817 21:11:10.444701  211031 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:11:10.444733  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:10.445075  211031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:11:10.445117  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:10.447587  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.447957  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.447988  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.448120  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:10.448329  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:10.448545  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:10.448711  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:10.541190  211031 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:11:10.546423  211031 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:11:10.546452  211031 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:11:10.546527  211031 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:11:10.546555  211031 start.go:303] post-start completed in 101.857966ms
	I0817 21:11:10.546597  211031 main.go:141] libmachine: (addons-696435) Calling .GetConfigRaw
	I0817 21:11:10.578543  211031 main.go:141] libmachine: (addons-696435) Calling .GetIP
	I0817 21:11:10.581225  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.581966  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.582019  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.582420  211031 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/config.json ...
	I0817 21:11:10.640273  211031 start.go:128] duration metric: createHost completed in 26.481011097s
	I0817 21:11:10.640358  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:10.643837  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.644280  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.644339  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.644575  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:10.644881  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:10.645123  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:10.645337  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:10.645534  211031 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:10.645969  211031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0817 21:11:10.645982  211031 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 21:11:10.771665  211031 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692306670.755178512
	
	I0817 21:11:10.771707  211031 fix.go:206] guest clock: 1692306670.755178512
	I0817 21:11:10.771717  211031 fix.go:219] Guest: 2023-08-17 21:11:10.755178512 +0000 UTC Remote: 2023-08-17 21:11:10.640311914 +0000 UTC m=+26.590240750 (delta=114.866598ms)
	I0817 21:11:10.771741  211031 fix.go:190] guest clock delta is within tolerance: 114.866598ms
	I0817 21:11:10.771746  211031 start.go:83] releasing machines lock for "addons-696435", held for 26.612607309s
	I0817 21:11:10.771770  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:10.772068  211031 main.go:141] libmachine: (addons-696435) Calling .GetIP
	I0817 21:11:10.774647  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.775077  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.775113  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.775266  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:10.775809  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:10.776020  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:10.776123  211031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:11:10.776174  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:10.776250  211031 ssh_runner.go:195] Run: cat /version.json
	I0817 21:11:10.776278  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:10.779028  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.779062  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.779454  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.779490  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.779535  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:10.779555  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:10.779660  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:10.779820  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:10.779899  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:10.780021  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:10.780254  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:10.780299  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:10.780412  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:10.780576  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:10.891230  211031 ssh_runner.go:195] Run: systemctl --version
	I0817 21:11:10.897601  211031 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:11:11.067734  211031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 21:11:11.073879  211031 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:11:11.073967  211031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:11:11.090269  211031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 21:11:11.090294  211031 start.go:466] detecting cgroup driver to use...
	I0817 21:11:11.090388  211031 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:11:11.104482  211031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:11:11.117120  211031 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:11:11.117212  211031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:11:11.130846  211031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:11:11.144003  211031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:11:11.255771  211031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:11:11.369356  211031 docker.go:212] disabling docker service ...
	I0817 21:11:11.369453  211031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:11:11.383542  211031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:11:11.396426  211031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:11:11.503458  211031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:11:11.604601  211031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:11:11.617560  211031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:11:11.634493  211031 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:11:11.634579  211031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:11.644139  211031 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:11:11.644219  211031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:11.653894  211031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:11.663736  211031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:11.673694  211031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:11:11.683683  211031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:11:11.692456  211031 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:11:11.692525  211031 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 21:11:11.705319  211031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:11:11.715977  211031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:11:11.823050  211031 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:11:11.998410  211031 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:11:11.998495  211031 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:11:12.003810  211031 start.go:534] Will wait 60s for crictl version
	I0817 21:11:12.003885  211031 ssh_runner.go:195] Run: which crictl
	I0817 21:11:12.007947  211031 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:11:12.044647  211031 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:11:12.044778  211031 ssh_runner.go:195] Run: crio --version
	I0817 21:11:12.095399  211031 ssh_runner.go:195] Run: crio --version
	I0817 21:11:12.149662  211031 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 21:11:12.151243  211031 main.go:141] libmachine: (addons-696435) Calling .GetIP
	I0817 21:11:12.153956  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:12.154344  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:12.154380  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:12.154564  211031 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:11:12.158992  211031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:11:12.171139  211031 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:11:12.171205  211031 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:11:12.204333  211031 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 21:11:12.204406  211031 ssh_runner.go:195] Run: which lz4
	I0817 21:11:12.208501  211031 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 21:11:12.212717  211031 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:11:12.212760  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 21:11:13.957078  211031 crio.go:444] Took 1.748625 seconds to copy over tarball
	I0817 21:11:13.957171  211031 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:11:16.756884  211031 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.799680534s)
	I0817 21:11:16.756922  211031 crio.go:451] Took 2.799806 seconds to extract the tarball
	I0817 21:11:16.756935  211031 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 21:11:16.797449  211031 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:11:16.851310  211031 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:11:16.851341  211031 cache_images.go:84] Images are preloaded, skipping loading
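Note on the preceding stretch (21:11:12.204 through 21:11:16.851): this is the image-preload path. `crictl images` shows none of the expected Kubernetes images yet, so the cached preload tarball (~437 MB) is copied into the VM, unpacked under /var, and removed, after which the same check succeeds. A condensed sketch of that check-copy-extract sequence, using hypothetical runOnNode/scpToNode helpers rather than minikube's actual ssh_runner API:

package main

import "fmt"

// ensurePreload mirrors the log above: skip if the images are already present,
// otherwise copy the preload tarball to the node, extract it, and clean up.
// runOnNode and scpToNode are assumed helpers for this illustration.
func ensurePreload(runOnNode func(cmd string) error, scpToNode func(local, remote string) error) error {
	// Does CRI-O already report the expected control-plane images?
	if err := runOnNode("sudo crictl images --output json | grep -q kube-apiserver"); err == nil {
		return nil
	}
	local := "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4"
	if err := scpToNode(local, "/preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("copying preload tarball: %w", err)
	}
	if err := runOnNode("sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("extracting preload tarball: %w", err)
	}
	return runOnNode("sudo rm -f /preloaded.tar.lz4")
}

func main() {
	// Stub runners so the sketch compiles and runs on its own.
	run := func(string) error { return nil }
	scp := func(string, string) error { return nil }
	fmt.Println(ensurePreload(run, scp))
}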
	I0817 21:11:16.851426  211031 ssh_runner.go:195] Run: crio config
	I0817 21:11:16.912180  211031 cni.go:84] Creating CNI manager for ""
	I0817 21:11:16.912215  211031 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:11:16.912244  211031 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:11:16.912270  211031 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-696435 NodeName:addons-696435 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:11:16.912449  211031 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-696435"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:11:16.912553  211031 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-696435 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-696435 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
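For orientation, the kubeadm YAML a few lines above is rendered from the options logged at 21:11:16.912270; it is written to /var/tmp/minikube/kubeadm.yaml.new just below and consumed by the `kubeadm init --config /var/tmp/minikube/kubeadm.yaml` invocation later in this log. A rough, illustrative sketch of rendering such a file from a handful of those parameters with text/template (not minikube's actual template or types):

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds a few of the fields logged above; the real generator
// uses a much larger options struct.
type kubeadmParams struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.18",
		NodeName:          "addons-696435",
		KubernetesVersion: "v1.27.4",
		PodSubnet:         "10.244.0.0/16",
	}
	// Render the config to stdout; minikube instead writes it to
	// /var/tmp/minikube/kubeadm.yaml.new and copies it into place over SSH.
	_ = template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p)
}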
	I0817 21:11:16.912617  211031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:11:16.922409  211031 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:11:16.922491  211031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:11:16.932169  211031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0817 21:11:16.948725  211031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:11:16.965052  211031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0817 21:11:16.981110  211031 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0817 21:11:16.985246  211031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:11:16.998211  211031 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435 for IP: 192.168.39.18
	I0817 21:11:16.998249  211031 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:16.998416  211031 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:11:17.104695  211031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt ...
	I0817 21:11:17.104731  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt: {Name:mk4a80978888a1b018f1e9de7f258377115018df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.104904  211031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key ...
	I0817 21:11:17.104915  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key: {Name:mk907f21beb7c92621d047f7068144310eac8170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.104980  211031 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:11:17.417342  211031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt ...
	I0817 21:11:17.417381  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt: {Name:mkad8193b34086592ff72d771b479fc0872cf7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.417566  211031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key ...
	I0817 21:11:17.417577  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key: {Name:mkb1fc89878f131ce0498f2e401b00ac5b7006cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.417684  211031 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.key
	I0817 21:11:17.417698  211031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt with IP's: []
	I0817 21:11:17.610027  211031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt ...
	I0817 21:11:17.610073  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: {Name:mkde132c6e52a5bdad53560a425d603e65b70d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.610262  211031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.key ...
	I0817 21:11:17.610274  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.key: {Name:mk27f03162522429c545080b5541c756ba954241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.610344  211031 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.key.c202909e
	I0817 21:11:17.610362  211031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.crt.c202909e with IP's: [192.168.39.18 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:11:17.843654  211031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.crt.c202909e ...
	I0817 21:11:17.843694  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.crt.c202909e: {Name:mkc79be1ccedd8f40d5f3015c247b61669a10677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.843906  211031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.key.c202909e ...
	I0817 21:11:17.843923  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.key.c202909e: {Name:mk3b5213816aa2ad64c7f6853bba25efac487137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.844027  211031 certs.go:337] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.crt.c202909e -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.crt
	I0817 21:11:17.844126  211031 certs.go:341] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.key.c202909e -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.key
	I0817 21:11:17.844177  211031 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.key
	I0817 21:11:17.844194  211031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.crt with IP's: []
	I0817 21:11:17.951593  211031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.crt ...
	I0817 21:11:17.951626  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.crt: {Name:mk20ab073496ca1ee9da85477f2fcb38cea8f000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.951817  211031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.key ...
	I0817 21:11:17.951836  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.key: {Name:mk3611f0206c8366721b25812d4d22024fea8191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:17.952034  211031 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:11:17.952072  211031 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:11:17.952096  211031 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:11:17.952120  211031 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:11:17.952768  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:11:17.978173  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 21:11:18.002356  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:11:18.027022  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 21:11:18.052744  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:11:18.077171  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:11:18.101083  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:11:18.124263  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:11:18.148340  211031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:11:18.171898  211031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:11:18.188580  211031 ssh_runner.go:195] Run: openssl version
	I0817 21:11:18.194668  211031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:11:18.205081  211031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:11:18.209931  211031 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:11:18.210007  211031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:11:18.215896  211031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
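The name b5213941.0 above is not arbitrary: it is the OpenSSL subject-name hash of the minikube CA, computed just above with `openssl x509 -hash -noout`, and the hash-named symlink lets OpenSSL-based clients that scan /etc/ssl/certs by hash locate the certificate. A small local sketch of those two steps (illustrative only; in the log they run on the guest via ssh_runner):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	caPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// Compute the OpenSSL subject hash of the CA certificate (e.g. "b5213941").
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		log.Fatalf("hashing CA: %v", err)
	}
	hash := strings.TrimSpace(string(out))

	// Link the CA into the hashed-certificates directory so OpenSSL-based
	// clients can discover it, mirroring the "ln -fs ... <hash>.0" in the log.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := exec.Command("sudo", "ln", "-fs", caPath, link).Run(); err != nil {
		log.Fatalf("linking CA: %v", err)
	}
	fmt.Println("linked", caPath, "->", link)
}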
	I0817 21:11:18.226264  211031 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:11:18.230823  211031 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:11:18.230892  211031 kubeadm.go:404] StartCluster: {Name:addons-696435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-696435 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:11:18.231015  211031 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:11:18.231071  211031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:11:18.264781  211031 cri.go:89] found id: ""
	I0817 21:11:18.264875  211031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:11:18.274270  211031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:11:18.283311  211031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:11:18.292820  211031 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:11:18.292881  211031 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 21:11:18.483553  211031 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:11:30.540637  211031 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 21:11:30.540714  211031 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:11:30.540813  211031 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:11:30.540949  211031 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:11:30.541078  211031 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:11:30.541163  211031 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:11:30.542877  211031 out.go:204]   - Generating certificates and keys ...
	I0817 21:11:30.542983  211031 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:11:30.543064  211031 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:11:30.543156  211031 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:11:30.543249  211031 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:11:30.543349  211031 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:11:30.543429  211031 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:11:30.543521  211031 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:11:30.543693  211031 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-696435 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0817 21:11:30.543785  211031 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:11:30.543924  211031 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-696435 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0817 21:11:30.544022  211031 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:11:30.544115  211031 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:11:30.544179  211031 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:11:30.544262  211031 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:11:30.544313  211031 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:11:30.544379  211031 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:11:30.544478  211031 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:11:30.544558  211031 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:11:30.544719  211031 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:11:30.544841  211031 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:11:30.544896  211031 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:11:30.544982  211031 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:11:30.546695  211031 out.go:204]   - Booting up control plane ...
	I0817 21:11:30.546825  211031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:11:30.546919  211031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:11:30.547001  211031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:11:30.547125  211031 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:11:30.547257  211031 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:11:30.547325  211031 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504450 seconds
	I0817 21:11:30.547408  211031 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:11:30.547567  211031 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:11:30.547637  211031 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:11:30.547891  211031 kubeadm.go:322] [mark-control-plane] Marking the node addons-696435 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:11:30.547989  211031 kubeadm.go:322] [bootstrap-token] Using token: fw1kf2.l5rfj34a3g2re8ne
	I0817 21:11:30.549587  211031 out.go:204]   - Configuring RBAC rules ...
	I0817 21:11:30.549736  211031 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:11:30.549854  211031 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:11:30.550046  211031 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:11:30.550230  211031 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:11:30.550404  211031 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:11:30.550518  211031 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:11:30.550698  211031 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:11:30.550758  211031 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:11:30.550829  211031 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:11:30.550840  211031 kubeadm.go:322] 
	I0817 21:11:30.550915  211031 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:11:30.550925  211031 kubeadm.go:322] 
	I0817 21:11:30.551040  211031 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:11:30.551055  211031 kubeadm.go:322] 
	I0817 21:11:30.551094  211031 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:11:30.551186  211031 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:11:30.551271  211031 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:11:30.551285  211031 kubeadm.go:322] 
	I0817 21:11:30.551365  211031 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 21:11:30.551383  211031 kubeadm.go:322] 
	I0817 21:11:30.551473  211031 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:11:30.551482  211031 kubeadm.go:322] 
	I0817 21:11:30.551553  211031 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:11:30.551649  211031 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:11:30.551737  211031 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:11:30.551750  211031 kubeadm.go:322] 
	I0817 21:11:30.551876  211031 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:11:30.551976  211031 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:11:30.551985  211031 kubeadm.go:322] 
	I0817 21:11:30.552083  211031 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fw1kf2.l5rfj34a3g2re8ne \
	I0817 21:11:30.552203  211031 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 21:11:30.552234  211031 kubeadm.go:322] 	--control-plane 
	I0817 21:11:30.552243  211031 kubeadm.go:322] 
	I0817 21:11:30.552343  211031 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:11:30.552351  211031 kubeadm.go:322] 
	I0817 21:11:30.552457  211031 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fw1kf2.l5rfj34a3g2re8ne \
	I0817 21:11:30.552590  211031 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
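The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. As a hedged aside (this is the standard kubeadm procedure, not something this log runs), it can be recomputed on the control plane with:

  # Recompute the discovery-token CA cert hash from the cluster CA
  # (paths assume the default kubeadm layout used here).
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'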
	I0817 21:11:30.552603  211031 cni.go:84] Creating CNI manager for ""
	I0817 21:11:30.552612  211031 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:11:30.554485  211031 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 21:11:30.555965  211031 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 21:11:30.572571  211031 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
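The 457-byte conflist written above is the bridge CNI configuration. A minimal sketch of the shape such a file generally takes (illustrative only; the values, including the subnet, are assumptions rather than the exact file minikube writes):

  # Illustrative only: a bridge conflist is typically of this shape.
  #   {
  #     "cniVersion": "0.3.1",
  #     "name": "bridge",
  #     "plugins": [
  #       { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
  #         "ipMasq": true,
  #         "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
  #       { "type": "portmap", "capabilities": { "portMappings": true } }
  #     ]
  #   }
  # Inspect the file actually written on the node:
  sudo cat /etc/cni/net.d/1-k8s.conflist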
	I0817 21:11:30.625622  211031 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:11:30.625741  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.625741  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=addons-696435 minikube.k8s.io/updated_at=2023_08_17T21_11_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.681262  211031 ops.go:34] apiserver oom_adj: -16
	I0817 21:11:30.882119  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.985528  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:31.601839  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:32.101145  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:32.601197  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:33.101732  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:33.601369  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:34.101424  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:34.601781  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:35.101793  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:35.601893  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:36.102131  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:36.602012  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:37.101328  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:37.601537  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:38.101904  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:38.601492  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:39.101470  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:39.601366  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:40.101970  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:40.602111  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:41.101619  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:41.602039  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:42.101846  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:42.602092  211031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:42.725844  211031 kubeadm.go:1081] duration metric: took 12.100169833s to wait for elevateKubeSystemPrivileges.
	I0817 21:11:42.725905  211031 kubeadm.go:406] StartCluster complete in 24.495003297s
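The burst of identical "kubectl get sa default" calls above is minikube polling until the default service account exists, i.e. the elevateKubeSystemPrivileges wait reported in the duration metric. A hedged shell equivalent of that loop:

  # Illustrative only: poll until the "default" service account is created,
  # mirroring the elevateKubeSystemPrivileges wait seen in the log above.
  until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done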
	I0817 21:11:42.725932  211031 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:42.726133  211031 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:11:42.726669  211031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:42.726919  211031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:11:42.727055  211031 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
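The toEnable map above lists which addons were requested for this profile. For reference, the same addons can be toggled individually from the CLI; these commands are illustrative (profile name taken from this log), not steps the test performs:

  out/minikube-linux-amd64 -p addons-696435 addons enable ingress
  out/minikube-linux-amd64 -p addons-696435 addons enable metrics-server
  out/minikube-linux-amd64 -p addons-696435 addons enable csi-hostpath-driver
  out/minikube-linux-amd64 -p addons-696435 addons list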
	I0817 21:11:42.727175  211031 addons.go:69] Setting ingress=true in profile "addons-696435"
	I0817 21:11:42.727190  211031 addons.go:69] Setting storage-provisioner=true in profile "addons-696435"
	I0817 21:11:42.727205  211031 addons.go:231] Setting addon ingress=true in "addons-696435"
	I0817 21:11:42.727208  211031 addons.go:231] Setting addon storage-provisioner=true in "addons-696435"
	I0817 21:11:42.727201  211031 addons.go:69] Setting metrics-server=true in profile "addons-696435"
	I0817 21:11:42.727206  211031 addons.go:69] Setting ingress-dns=true in profile "addons-696435"
	I0817 21:11:42.727232  211031 addons.go:231] Setting addon ingress-dns=true in "addons-696435"
	I0817 21:11:42.727241  211031 addons.go:69] Setting inspektor-gadget=true in profile "addons-696435"
	I0817 21:11:42.727251  211031 addons.go:231] Setting addon inspektor-gadget=true in "addons-696435"
	I0817 21:11:42.727256  211031 config.go:182] Loaded profile config "addons-696435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:11:42.727271  211031 addons.go:69] Setting cloud-spanner=true in profile "addons-696435"
	I0817 21:11:42.727280  211031 addons.go:231] Setting addon cloud-spanner=true in "addons-696435"
	I0817 21:11:42.727287  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727297  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727259  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727311  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727313  211031 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-696435"
	I0817 21:11:42.727357  211031 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-696435"
	I0817 21:11:42.727391  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727298  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727745  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.727745  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.727766  211031 addons.go:69] Setting helm-tiller=true in profile "addons-696435"
	I0817 21:11:42.727266  211031 addons.go:69] Setting default-storageclass=true in profile "addons-696435"
	I0817 21:11:42.727788  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.727791  211031 addons.go:231] Setting addon helm-tiller=true in "addons-696435"
	I0817 21:11:42.727232  211031 addons.go:231] Setting addon metrics-server=true in "addons-696435"
	I0817 21:11:42.727790  211031 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-696435"
	I0817 21:11:42.727823  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.727832  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727842  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.727866  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.727185  211031 addons.go:69] Setting registry=true in profile "addons-696435"
	I0817 21:11:42.727984  211031 addons.go:231] Setting addon registry=true in "addons-696435"
	I0817 21:11:42.728015  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.728017  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728036  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.728161  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728164  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728181  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.728197  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.728218  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.727306  211031 addons.go:69] Setting gcp-auth=true in profile "addons-696435"
	I0817 21:11:42.728259  211031 mustload.go:65] Loading cluster: addons-696435
	I0817 21:11:42.728261  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728242  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.728275  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.727769  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.727176  211031 addons.go:69] Setting volumesnapshots=true in profile "addons-696435"
	I0817 21:11:42.728362  211031 addons.go:231] Setting addon volumesnapshots=true in "addons-696435"
	I0817 21:11:42.728443  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728468  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728480  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.728498  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.728566  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.728600  211031 config.go:182] Loaded profile config "addons-696435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:11:42.728933  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.728965  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.748792  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0817 21:11:42.749098  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0817 21:11:42.749402  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.749404  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0817 21:11:42.749567  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.750216  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.750239  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.750259  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.750333  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.750345  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.750782  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.750795  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.750925  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.750946  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.751385  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.751447  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.751483  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.751518  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.751553  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.751600  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0817 21:11:42.751763  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.752783  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0817 21:11:42.753244  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.753818  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.753840  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.754303  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.754345  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.754371  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.754886  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.754929  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.755140  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.756182  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.756204  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.756618  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.757191  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.757222  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.768728  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0817 21:11:42.769685  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.770372  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.770397  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.770816  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.771425  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.771473  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.786374  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I0817 21:11:42.786554  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I0817 21:11:42.786806  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0817 21:11:42.786935  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.787045  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.787243  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.787639  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.787664  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.787937  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.787954  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.788195  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I0817 21:11:42.788489  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.788504  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.788575  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.789298  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.789349  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.789590  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.789684  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.789779  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.789969  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.790279  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.790298  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.790735  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0817 21:11:42.790878  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.790902  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.790944  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.790991  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0817 21:11:42.791417  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.791515  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.792526  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.792550  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.792721  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.792733  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.793204  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.793243  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.793390  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0817 21:11:42.793950  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.794045  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.794154  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.797277  211031 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:11:42.794443  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.795242  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.795934  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.799027  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.799128  211031 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:11:42.799143  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:11:42.799157  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.799162  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.799849  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.799904  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.800074  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.800392  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.800786  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0817 21:11:42.801487  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.802771  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.802794  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.803358  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.803669  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.803883  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.806040  211031 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0817 21:11:42.804849  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.805498  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.807716  211031 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0817 21:11:42.807730  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0817 21:11:42.807754  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.807820  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.807844  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.808027  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.808230  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.808752  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0817 21:11:42.809001  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40247
	I0817 21:11:42.809262  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.810027  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.810153  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.810226  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.810644  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.810665  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.812913  211031 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0817 21:11:42.811230  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.811919  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.811953  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.812768  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.813639  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0817 21:11:42.816722  211031 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:11:42.815163  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.815197  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.815462  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.815510  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.815574  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.818176  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.820361  211031 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:11:42.818486  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.819049  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.819083  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.820679  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.820778  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0817 21:11:42.822424  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.822596  211031 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0817 21:11:42.822612  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0817 21:11:42.822637  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.825201  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0817 21:11:42.823557  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.823612  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.823647  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.823973  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.824247  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0817 21:11:42.826451  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.828631  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0817 21:11:42.827464  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.827578  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.827618  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.827897  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.827952  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0817 21:11:42.828154  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.829140  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.832509  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0817 21:11:42.830509  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0817 21:11:42.830589  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.830611  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.830716  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.830988  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.831100  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.831471  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.832935  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.835889  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0817 21:11:42.834269  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.834361  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.834550  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.834791  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.834846  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.834881  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.835690  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0817 21:11:42.838184  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.839397  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.839432  211031 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0817 21:11:42.839770  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.839789  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.840125  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.840840  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0817 21:11:42.840866  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.840271  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.841118  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.841488  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.842769  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0817 21:11:42.842878  211031 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0817 21:11:42.843826  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.844517  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0817 21:11:42.844533  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0817 21:11:42.843880  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.843894  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.843856  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.844733  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.846160  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0817 21:11:42.847782  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 21:11:42.846403  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.847804  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 21:11:42.847827  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.846406  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.849292  211031 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0817 21:11:42.846456  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.846729  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.848362  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.851113  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.851195  211031 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0817 21:11:42.851847  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.852874  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.852930  211031 out.go:177]   - Using image docker.io/registry:2.8.1
	I0817 21:11:42.852966  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.852980  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0817 21:11:42.853270  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.854177  211031 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0817 21:11:42.854185  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.856066  211031 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 21:11:42.856085  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 21:11:42.856101  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.853596  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.854210  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.854225  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.854249  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.854809  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.854855  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.855893  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.856355  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.857557  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0817 21:11:42.858147  211031 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0817 21:11:42.858609  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.858634  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.859822  211031 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 21:11:42.861332  211031 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0817 21:11:42.859168  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.859889  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0817 21:11:42.860091  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.860119  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.861308  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.861883  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.861892  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.862423  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.862986  211031 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 21:11:42.863083  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 21:11:42.863112  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.863147  211031 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0817 21:11:42.863242  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.863676  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.863923  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.865823  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.863960  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.865857  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.864224  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.864796  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.865912  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.865752  211031 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0817 21:11:42.865965  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0817 21:11:42.865982  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.866307  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.866377  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.866427  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.866462  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.866628  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.866635  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.867324  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.867350  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.867403  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.867920  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.868106  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.868411  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.868632  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.869345  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.869952  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.869977  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.870316  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.870456  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.870596  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.870646  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.870786  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.871494  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.871511  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.871526  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.871662  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.871821  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.871936  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:42.897095  211031 addons.go:231] Setting addon default-storageclass=true in "addons-696435"
	I0817 21:11:42.897146  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:42.897438  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.897484  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.913484  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0817 21:11:42.914026  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.914582  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.914608  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.915042  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.915492  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:42.915521  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:42.931218  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0817 21:11:42.931723  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:42.932305  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:42.932339  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:42.932787  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:42.932983  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:42.934884  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:42.935180  211031 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:11:42.935198  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:11:42.935221  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:42.938807  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.939272  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:42.939316  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:42.939478  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:42.939716  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:42.939868  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:42.940071  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:43.095671  211031 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-696435" context rescaled to 1 replicas
	I0817 21:11:43.095721  211031 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:11:43.097997  211031 out.go:177] * Verifying Kubernetes components...
	I0817 21:11:43.099814  211031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:11:43.100000  211031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
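The sed pipeline above injects a log directive and a hosts block (mapping host.minikube.internal to the host gateway 192.168.39.1) into the CoreDNS Corefile before replacing the configmap. A hedged way to confirm the patch landed, run from the host with the test kubeconfig:

  # Illustrative check: the patched Corefile should contain a hosts block like
  #   hosts {
  #      192.168.39.1 host.minikube.internal
  #      fallthrough
  #   }
  kubectl --context addons-696435 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'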
	I0817 21:11:43.242593  211031 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 21:11:43.242630  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 21:11:43.293292  211031 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0817 21:11:43.293333  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0817 21:11:43.308342  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:11:43.308665  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:11:43.310492  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0817 21:11:43.310522  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0817 21:11:43.317992  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0817 21:11:43.323929  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0817 21:11:43.328898  211031 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0817 21:11:43.328925  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0817 21:11:43.334533  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0817 21:11:43.335892  211031 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 21:11:43.335912  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 21:11:43.353252  211031 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 21:11:43.353289  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0817 21:11:43.374040  211031 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 21:11:43.374082  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0817 21:11:43.413593  211031 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0817 21:11:43.413629  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0817 21:11:43.487026  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 21:11:43.487062  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0817 21:11:43.525087  211031 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 21:11:43.525119  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 21:11:43.548215  211031 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0817 21:11:43.548240  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0817 21:11:43.576761  211031 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 21:11:43.576791  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 21:11:43.579328  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 21:11:43.592680  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0817 21:11:43.665952  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 21:11:43.665981  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0817 21:11:43.684533  211031 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 21:11:43.684561  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0817 21:11:43.742108  211031 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 21:11:43.742138  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 21:11:43.764242  211031 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0817 21:11:43.764277  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0817 21:11:43.830682  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 21:11:43.830713  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0817 21:11:43.845040  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 21:11:43.845066  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0817 21:11:43.914871  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 21:11:43.923307  211031 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0817 21:11:43.923338  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0817 21:11:44.009692  211031 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 21:11:44.009723  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0817 21:11:44.026331  211031 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:11:44.026363  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0817 21:11:44.046408  211031 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0817 21:11:44.046437  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0817 21:11:44.084577  211031 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 21:11:44.084604  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0817 21:11:44.100444  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:11:44.128560  211031 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0817 21:11:44.128586  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0817 21:11:44.186329  211031 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 21:11:44.186357  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0817 21:11:44.211300  211031 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0817 21:11:44.211329  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0817 21:11:44.258721  211031 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 21:11:44.258754  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0817 21:11:44.274825  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0817 21:11:44.320324  211031 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 21:11:44.320354  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0817 21:11:44.392539  211031 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 21:11:44.392570  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 21:11:44.443681  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 21:11:47.705618  211031 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.605766347s)
	I0817 21:11:47.705615  211031 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.60556325s)
	I0817 21:11:47.705839  211031 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
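The bash pipeline that just completed rewrites the coredns ConfigMap in place: it pulls the Corefile, uses sed to splice a hosts block in front of the existing forward plugin (and a log directive in front of errors), and feeds the result back through kubectl replace. Reformatted for readability, the fragment the sed expression injects is just this, placed immediately before the stock "forward . /etc/resolv.conf" line:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

That entry is what lets workloads inside the cluster resolve host.minikube.internal to the KVM host at 192.168.39.1.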
	I0817 21:11:47.706526  211031 node_ready.go:35] waiting up to 6m0s for node "addons-696435" to be "Ready" ...
	I0817 21:11:47.841607  211031 node_ready.go:49] node "addons-696435" has status "Ready":"True"
	I0817 21:11:47.841641  211031 node_ready.go:38] duration metric: took 135.092556ms waiting for node "addons-696435" to be "Ready" ...
	I0817 21:11:47.841652  211031 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:11:48.242313  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.933928517s)
	I0817 21:11:48.242390  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:48.242407  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:48.242815  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:48.242837  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:48.242874  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:48.242950  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:48.242973  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:48.243300  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:48.243338  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:48.243354  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:48.243382  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:48.243397  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:48.243687  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:48.243714  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:48.243734  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:48.268184  211031 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace to be "Ready" ...
	I0817 21:11:49.964441  211031 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 21:11:49.964499  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:49.967787  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:49.968282  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:49.968323  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:49.968476  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:49.968810  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:49.969010  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:49.969176  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
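The ssh client above is assembled from values libmachine discovered through libvirt: the DHCP lease for MAC 52:54:00:9a:f5:12 maps to 192.168.39.18, and the per-machine key lives under the minikube profile directory. A roughly equivalent manual connection from the CI host (assuming the same key path and the default docker user) would be:

    ssh -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa docker@192.168.39.18

minikube -p addons-696435 ssh wraps the same connection details.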
	I0817 21:11:50.181200  211031 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 21:11:50.226690  211031 addons.go:231] Setting addon gcp-auth=true in "addons-696435"
	I0817 21:11:50.226767  211031 host.go:66] Checking if "addons-696435" exists ...
	I0817 21:11:50.227254  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:50.227314  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:50.244101  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0817 21:11:50.244674  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:50.245381  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:50.245417  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:50.245786  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:50.246402  211031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:11:50.246441  211031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:11:50.263352  211031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0817 21:11:50.263901  211031 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:11:50.264487  211031 main.go:141] libmachine: Using API Version  1
	I0817 21:11:50.264516  211031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:11:50.264868  211031 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:11:50.265080  211031 main.go:141] libmachine: (addons-696435) Calling .GetState
	I0817 21:11:50.267220  211031 main.go:141] libmachine: (addons-696435) Calling .DriverName
	I0817 21:11:50.267919  211031 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0817 21:11:50.267962  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHHostname
	I0817 21:11:50.271251  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:50.271792  211031 main.go:141] libmachine: (addons-696435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:f5:12", ip: ""} in network mk-addons-696435: {Iface:virbr1 ExpiryTime:2023-08-17 22:11:00 +0000 UTC Type:0 Mac:52:54:00:9a:f5:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-696435 Clientid:01:52:54:00:9a:f5:12}
	I0817 21:11:50.271843  211031 main.go:141] libmachine: (addons-696435) DBG | domain addons-696435 has defined IP address 192.168.39.18 and MAC address 52:54:00:9a:f5:12 in network mk-addons-696435
	I0817 21:11:50.272102  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHPort
	I0817 21:11:50.272447  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHKeyPath
	I0817 21:11:50.272796  211031 main.go:141] libmachine: (addons-696435) Calling .GetSSHUsername
	I0817 21:11:50.273067  211031 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/addons-696435/id_rsa Username:docker}
	I0817 21:11:50.750815  211031 pod_ready.go:102] pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace has status "Ready":"False"
	I0817 21:11:51.116538  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.807821947s)
	I0817 21:11:51.116611  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:51.116627  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.798593428s)
	I0817 21:11:51.116671  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:51.116688  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:51.116638  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:51.117056  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:51.117075  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:51.117085  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:51.117099  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:51.117106  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:51.117126  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:51.117136  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:51.117150  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:51.117171  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:51.117324  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:51.117338  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:51.117522  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:51.117541  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:51.117560  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.528535  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.204553752s)
	I0817 21:11:52.528609  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.528624  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.528622  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.194047361s)
	I0817 21:11:52.528667  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.528685  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.528704  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.949327687s)
	I0817 21:11:52.528740  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.528754  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.528761  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.936041785s)
	I0817 21:11:52.528785  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.528796  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.528830  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.613928815s)
	I0817 21:11:52.528851  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.528861  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.528948  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.428464447s)
	W0817 21:11:52.528978  211031 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0817 21:11:52.529015  211031 retry.go:31] will retry after 341.627206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
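The failure being retried here is the usual CRD ordering race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define it, and the API server has not finished registering the new kinds by the time the custom resource arrives, hence "ensure CRDs are installed first". minikube's answer is simply to retry (and, a little further down, to re-apply with --force), which succeeds once the CRDs are established. A manual equivalent, sketched with the same file names used above and an assumed 60s timeout, would apply the CRDs first and wait for them explicitly before applying the snapshot class:

    kubectl apply \
      -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml -f volume-snapshot-controller-deployment.yaml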
	I0817 21:11:52.529093  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.254235917s)
	I0817 21:11:52.529111  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.529121  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531085  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531095  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.531107  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.531119  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.531133  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.531141  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531149  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.531151  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.531163  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.531168  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.531178  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531188  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531198  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.531200  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531117  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531142  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531239  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.531225  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.531256  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531291  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.531303  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.531323  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531316  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531346  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.531367  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.531376  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531382  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.531392  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:52.531400  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:52.531436  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.531452  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.531460  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.533616  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.533624  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.533673  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.533623  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.533694  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.533628  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.533721  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.533728  211031 addons.go:467] Verifying addon registry=true in "addons-696435"
	I0817 21:11:52.533647  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.536059  211031 out.go:177] * Verifying registry addon...
	I0817 21:11:52.533831  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.533661  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.533677  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:52.533675  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.533706  211031 addons.go:467] Verifying addon metrics-server=true in "addons-696435"
	I0817 21:11:52.533650  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:52.537680  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.537696  211031 addons.go:467] Verifying addon ingress=true in "addons-696435"
	I0817 21:11:52.539736  211031 out.go:177] * Verifying ingress addon...
	I0817 21:11:52.537796  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:52.538452  211031 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 21:11:52.542630  211031 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 21:11:52.563297  211031 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 21:11:52.563319  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:52.583619  211031 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 21:11:52.583646  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:52.590503  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:52.594729  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
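The kapi.go lines that repeat for the next several minutes are label-based readiness polling: for each addon being verified, minikube lists the pods matching a fixed label selector in the addon's namespace and keeps checking until every one of them reports Ready. A hand-run approximation with kubectl, using the registry selector from the log and an assumed 6-minute timeout, looks like this; the other addons (app.kubernetes.io/name=ingress-nginx in ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver and =gcp-auth) follow the same shape:

    kubectl -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m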
	I0817 21:11:52.871165  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:11:53.045710  211031 pod_ready.go:102] pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace has status "Ready":"False"
	I0817 21:11:53.169762  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:53.191908  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:53.337694  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.893948137s)
	I0817 21:11:53.337760  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:53.337788  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:53.337784  211031 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.069828783s)
	I0817 21:11:53.340621  211031 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:11:53.338260  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:53.338292  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:53.342544  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:53.342583  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:53.342600  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:53.344628  211031 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0817 21:11:53.342960  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:53.343031  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:53.346677  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:53.346703  211031 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-696435"
	I0817 21:11:53.346744  211031 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 21:11:53.348415  211031 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 21:11:53.346788  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 21:11:53.351039  211031 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 21:11:53.445342  211031 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 21:11:53.445379  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:53.476143  211031 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 21:11:53.476178  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0817 21:11:53.522110  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:53.532491  211031 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 21:11:53.532516  211031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0817 21:11:53.584147  211031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 21:11:53.685993  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:53.699044  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:54.060844  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:54.120679  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:54.121566  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:54.569058  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:54.651993  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:54.654540  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:55.050261  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:55.103230  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:55.109254  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:55.467875  211031 pod_ready.go:102] pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace has status "Ready":"False"
	I0817 21:11:55.626450  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:55.665158  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:55.665733  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:55.943439  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.072223505s)
	I0817 21:11:55.943503  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:55.943524  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:55.943853  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:55.943886  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:55.943900  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:55.943911  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:55.944194  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:55.944209  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:55.944224  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:56.039008  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:56.112961  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:56.127213  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:56.215310  211031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.631108966s)
	I0817 21:11:56.215379  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:56.215398  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:56.216074  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:56.216133  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:56.216154  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:56.216168  211031 main.go:141] libmachine: Making call to close driver server
	I0817 21:11:56.216191  211031 main.go:141] libmachine: (addons-696435) Calling .Close
	I0817 21:11:56.216571  211031 main.go:141] libmachine: (addons-696435) DBG | Closing plugin on server side
	I0817 21:11:56.216638  211031 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:11:56.216649  211031 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:11:56.218926  211031 addons.go:467] Verifying addon gcp-auth=true in "addons-696435"
	I0817 21:11:56.221182  211031 out.go:177] * Verifying gcp-auth addon...
	I0817 21:11:56.224530  211031 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 21:11:56.241247  211031 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 21:11:56.241277  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:56.262527  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:56.528888  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:56.602008  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:56.605247  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:56.767420  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:57.033692  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:57.100134  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:57.109878  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:57.277297  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:57.536245  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:57.604607  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:57.607207  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:57.771999  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:57.941628  211031 pod_ready.go:102] pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace has status "Ready":"False"
	I0817 21:11:58.029533  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:58.096031  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:58.102147  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:58.266858  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:58.528626  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:58.596811  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:58.600285  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:58.767340  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:59.029173  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:59.096921  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:59.099778  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:59.266912  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:59.529250  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:59.597440  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:59.604168  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:59.780590  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:59.965340  211031 pod_ready.go:97] pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.18 PodIP: PodIPs:[] StartTime:2023-08-17 21:11:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-17 21:11:49 +0000 UTC,FinishedAt:2023-08-17 21:11:59 +0000 UTC,ContainerID:cri-o://528d52f17ed224d774434ad0048404e046f157a21b8bf6adf2e1b967b8193adb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://528d52f17ed224d774434ad0048404e046f157a21b8bf6adf2e1b967b8193adb Started:0xc001a7d580 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0817 21:11:59.965383  211031 pod_ready.go:81] duration metric: took 11.697139797s waiting for pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace to be "Ready" ...
	E0817 21:11:59.965397  211031 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-5skcm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:11:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.18 PodIP: PodIPs:[] StartTime:2023-08-17 21:11:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-17 21:11:49 +0000 UTC,FinishedAt:2023-08-17 21:11:59 +0000 UTC,ContainerID:cri-o://528d52f17ed224d774434ad0048404e046f157a21b8bf6adf2e1b967b8193adb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://528d52f17ed224d774434ad0048404e046f157a21b8bf6adf2e1b967b8193adb Started:0xc001a7d580 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0817 21:11:59.965408  211031 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace to be "Ready" ...
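The waiter treats a pod whose phase is Succeeded as terminal: coredns-5d78c9869d-5skcm has exited cleanly (most likely because minikube scales the CoreDNS Deployment down to a single replica), so it can never report Ready again and pod_ready moves on to the surviving replica, coredns-5d78c9869d-x6x28. The two fields it is effectively looking at, the phase and the Ready condition, can be read directly; a minimal spot check (the jsonpath below is an illustration, not minikube's own code) is:

    kubectl -n kube-system get pod coredns-5d78c9869d-x6x28 \
      -o jsonpath='{.status.phase}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'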
	I0817 21:12:00.029889  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:00.096154  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:00.099371  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:00.267487  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:00.527929  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:00.627581  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:00.630999  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:00.768062  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:01.030340  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:01.119111  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:01.119594  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:01.266991  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:01.531160  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:01.604372  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:01.606863  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:01.768042  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:01.997304  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:02.039449  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:02.096357  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:02.099903  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:02.269170  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:02.530593  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:02.596527  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:02.600055  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:02.766499  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:03.037019  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:03.100610  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:03.121981  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:03.272298  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:03.533922  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:03.609046  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:03.610788  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:03.778644  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:04.029381  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:04.115169  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:04.120303  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:04.277371  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:04.495716  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:04.534450  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:04.600213  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:04.601871  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:04.766908  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:05.029141  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:05.097762  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:05.102372  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:05.266644  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:05.528540  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:05.599877  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:05.601754  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:05.780152  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:06.038275  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:06.098877  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:06.101969  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:06.268841  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:06.529291  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:06.596675  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:06.610856  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:06.766855  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:07.003035  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:07.063882  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:07.096774  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:07.101594  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:07.272600  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:07.529002  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:07.596036  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:07.600698  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:07.767927  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:08.030817  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:08.096507  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:08.099881  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:08.267822  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:08.528004  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:08.596234  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:08.600542  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:08.767400  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:09.029006  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:09.096186  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:09.102492  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:09.268188  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:09.488348  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:09.528796  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:09.596280  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:09.599236  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:09.766494  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:10.082420  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:10.095352  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:10.104553  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:10.267897  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:10.528910  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:10.595996  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:10.599403  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:10.767330  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:11.028248  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:11.096711  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:11.102783  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:11.269827  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:11.528845  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:11.610603  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:11.613096  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:11.771192  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:11.986785  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:12.043470  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:12.108682  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:12.108886  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:12.270865  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:12.532596  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:12.597731  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:12.602774  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:12.770303  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:13.029550  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:13.097208  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:13.101020  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:13.268320  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:13.529960  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:13.596639  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:13.601560  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:13.767371  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:13.989325  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:14.046993  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:14.096346  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:14.099266  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:14.266987  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:14.528055  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:14.596338  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:14.601850  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:14.767008  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:15.030997  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:15.096064  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:15.104085  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:15.275136  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:15.531807  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:15.597998  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:15.599355  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:15.767218  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:16.034405  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:16.095557  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:16.099201  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:16.266454  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:16.487794  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:16.528971  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:16.596271  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:16.600441  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:16.766617  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:17.032350  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:17.096629  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:17.100512  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:17.267226  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:17.529923  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:17.596399  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:17.600337  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:17.767196  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:18.030521  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:18.095405  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:18.099366  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:18.266916  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:18.530070  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:18.596548  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:18.599935  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:18.767156  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:18.989904  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:19.038675  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:19.096200  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:19.100518  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:19.268270  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:19.528905  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:19.595424  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:19.600527  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:19.767170  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:20.029633  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:20.096346  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:20.101346  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:20.266728  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:20.528105  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:20.596523  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:20.600564  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:20.767141  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:20.995325  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:21.030579  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:21.098770  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:21.102796  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:21.267585  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:21.529206  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:21.596572  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:21.600463  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:21.766696  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:22.028887  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:22.097466  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:22.099797  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:22.267229  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:22.528254  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:22.596791  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:22.600729  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:22.767278  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:23.028527  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:23.377672  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:23.378205  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:23.378454  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:23.486751  211031 pod_ready.go:102] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:23.528012  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:23.598704  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:23.601466  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:23.767434  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:24.184915  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:24.186110  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:24.188712  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:24.267467  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:24.531239  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:24.595964  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:24.599461  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:24.793498  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:24.986966  211031 pod_ready.go:92] pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:24.986991  211031 pod_ready.go:81] duration metric: took 25.021574236s waiting for pod "coredns-5d78c9869d-x6x28" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:24.987001  211031 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:24.994262  211031 pod_ready.go:92] pod "etcd-addons-696435" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:24.994291  211031 pod_ready.go:81] duration metric: took 7.283431ms waiting for pod "etcd-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:24.994305  211031 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.005699  211031 pod_ready.go:92] pod "kube-apiserver-addons-696435" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:25.005732  211031 pod_ready.go:81] duration metric: took 11.417826ms waiting for pod "kube-apiserver-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.005747  211031 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.012353  211031 pod_ready.go:92] pod "kube-controller-manager-addons-696435" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:25.012383  211031 pod_ready.go:81] duration metric: took 6.626962ms waiting for pod "kube-controller-manager-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.012397  211031 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xgd2l" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.019881  211031 pod_ready.go:92] pod "kube-proxy-xgd2l" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:25.019975  211031 pod_ready.go:81] duration metric: took 7.568462ms waiting for pod "kube-proxy-xgd2l" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.020011  211031 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.033689  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:25.100047  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:25.111267  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:25.267367  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:25.383822  211031 pod_ready.go:92] pod "kube-scheduler-addons-696435" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:25.383843  211031 pod_ready.go:81] duration metric: took 363.797706ms waiting for pod "kube-scheduler-addons-696435" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:25.383851  211031 pod_ready.go:38] duration metric: took 37.542189763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:12:25.383910  211031 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:12:25.383967  211031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:12:25.405894  211031 api_server.go:72] duration metric: took 42.310141969s to wait for apiserver process to appear ...
	I0817 21:12:25.405919  211031 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:12:25.405941  211031 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0817 21:12:25.414583  211031 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0817 21:12:25.415932  211031 api_server.go:141] control plane version: v1.27.4
	I0817 21:12:25.415955  211031 api_server.go:131] duration metric: took 10.029848ms to wait for apiserver health ...
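For reference, the healthz probe logged just above amounts to an HTTPS GET against the apiserver's /healthz endpoint and a check that it answers 200 "ok". Below is a minimal standalone sketch of that kind of poll, not minikube's own implementation: the endpoint URL is copied from the log, and the insecure TLS client is an assumption made only because this sketch carries no cluster CA bundle.

// healthz_sketch.go: poll an apiserver /healthz endpoint once and print the result.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: no CA bundle available in this sketch, so TLS verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.18:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}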
	I0817 21:12:25.415963  211031 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:12:25.529162  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:25.590864  211031 system_pods.go:59] 17 kube-system pods found
	I0817 21:12:25.590892  211031 system_pods.go:61] "coredns-5d78c9869d-x6x28" [542435b0-c08d-41b3-9af3-e974c321fe08] Running
	I0817 21:12:25.590898  211031 system_pods.go:61] "csi-hostpath-attacher-0" [e999c60a-759b-4261-8517-87003116dca0] Running
	I0817 21:12:25.590905  211031 system_pods.go:61] "csi-hostpath-resizer-0" [6c39fd53-9ccd-419e-aef0-b3c823987d41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0817 21:12:25.590914  211031 system_pods.go:61] "csi-hostpathplugin-jf5rs" [bc8af316-8909-4fdc-b1d6-968cf264393e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0817 21:12:25.590919  211031 system_pods.go:61] "etcd-addons-696435" [24836a19-7fb6-4210-a895-7e829582fdec] Running
	I0817 21:12:25.590924  211031 system_pods.go:61] "kube-apiserver-addons-696435" [5a1d5c61-f6c1-44c4-8e51-9cea3e1a3e91] Running
	I0817 21:12:25.590929  211031 system_pods.go:61] "kube-controller-manager-addons-696435" [e660cc03-a636-4409-89d0-6c9a4dbbc1fa] Running
	I0817 21:12:25.590934  211031 system_pods.go:61] "kube-ingress-dns-minikube" [6f2d1451-9a50-422e-a13f-6bce5aec47c5] Running
	I0817 21:12:25.590938  211031 system_pods.go:61] "kube-proxy-xgd2l" [83dc502c-f43f-4d13-8a2f-631df8694866] Running
	I0817 21:12:25.590942  211031 system_pods.go:61] "kube-scheduler-addons-696435" [f9db01cc-14a2-4845-9fd9-ef19cd653150] Running
	I0817 21:12:25.590948  211031 system_pods.go:61] "metrics-server-7746886d4f-bl9hm" [06ae83cd-c41c-4ed2-83d0-3670567bffaf] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 21:12:25.590954  211031 system_pods.go:61] "registry-9d6j4" [4c077f43-ad63-4dec-a59e-eb68f3db07da] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0817 21:12:25.590965  211031 system_pods.go:61] "registry-proxy-kk4lq" [5d6aa5a0-242c-4db5-834f-597e7cbe48df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 21:12:25.590974  211031 system_pods.go:61] "snapshot-controller-75bbb956b9-4cds5" [e2fd1f3e-2ec4-4c00-ab28-4809e66d4f05] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0817 21:12:25.590983  211031 system_pods.go:61] "snapshot-controller-75bbb956b9-7lnwq" [220f6781-606e-4ddf-9f73-cbbac8cb6f45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0817 21:12:25.590988  211031 system_pods.go:61] "storage-provisioner" [d45325f4-e19f-4ecf-9b8a-76e59a0327f6] Running
	I0817 21:12:25.590997  211031 system_pods.go:61] "tiller-deploy-6847666dc-j68kn" [40c6973c-a1b0-4038-9fa8-bedc28b207f1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0817 21:12:25.591003  211031 system_pods.go:74] duration metric: took 175.035606ms to wait for pod list to return data ...
	I0817 21:12:25.591013  211031 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:12:25.594854  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:25.598422  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:25.767411  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:25.783320  211031 default_sa.go:45] found service account: "default"
	I0817 21:12:25.783356  211031 default_sa.go:55] duration metric: took 192.335701ms for default service account to be created ...
	I0817 21:12:25.783365  211031 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:12:25.989798  211031 system_pods.go:86] 17 kube-system pods found
	I0817 21:12:25.989838  211031 system_pods.go:89] "coredns-5d78c9869d-x6x28" [542435b0-c08d-41b3-9af3-e974c321fe08] Running
	I0817 21:12:25.989846  211031 system_pods.go:89] "csi-hostpath-attacher-0" [e999c60a-759b-4261-8517-87003116dca0] Running
	I0817 21:12:25.989857  211031 system_pods.go:89] "csi-hostpath-resizer-0" [6c39fd53-9ccd-419e-aef0-b3c823987d41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0817 21:12:25.989870  211031 system_pods.go:89] "csi-hostpathplugin-jf5rs" [bc8af316-8909-4fdc-b1d6-968cf264393e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0817 21:12:25.989878  211031 system_pods.go:89] "etcd-addons-696435" [24836a19-7fb6-4210-a895-7e829582fdec] Running
	I0817 21:12:25.989885  211031 system_pods.go:89] "kube-apiserver-addons-696435" [5a1d5c61-f6c1-44c4-8e51-9cea3e1a3e91] Running
	I0817 21:12:25.989893  211031 system_pods.go:89] "kube-controller-manager-addons-696435" [e660cc03-a636-4409-89d0-6c9a4dbbc1fa] Running
	I0817 21:12:25.989900  211031 system_pods.go:89] "kube-ingress-dns-minikube" [6f2d1451-9a50-422e-a13f-6bce5aec47c5] Running
	I0817 21:12:25.989906  211031 system_pods.go:89] "kube-proxy-xgd2l" [83dc502c-f43f-4d13-8a2f-631df8694866] Running
	I0817 21:12:25.989916  211031 system_pods.go:89] "kube-scheduler-addons-696435" [f9db01cc-14a2-4845-9fd9-ef19cd653150] Running
	I0817 21:12:25.989927  211031 system_pods.go:89] "metrics-server-7746886d4f-bl9hm" [06ae83cd-c41c-4ed2-83d0-3670567bffaf] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 21:12:25.989934  211031 system_pods.go:89] "registry-9d6j4" [4c077f43-ad63-4dec-a59e-eb68f3db07da] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0817 21:12:25.990018  211031 system_pods.go:89] "registry-proxy-kk4lq" [5d6aa5a0-242c-4db5-834f-597e7cbe48df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 21:12:25.990080  211031 system_pods.go:89] "snapshot-controller-75bbb956b9-4cds5" [e2fd1f3e-2ec4-4c00-ab28-4809e66d4f05] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0817 21:12:25.990099  211031 system_pods.go:89] "snapshot-controller-75bbb956b9-7lnwq" [220f6781-606e-4ddf-9f73-cbbac8cb6f45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0817 21:12:25.990108  211031 system_pods.go:89] "storage-provisioner" [d45325f4-e19f-4ecf-9b8a-76e59a0327f6] Running
	I0817 21:12:25.990122  211031 system_pods.go:89] "tiller-deploy-6847666dc-j68kn" [40c6973c-a1b0-4038-9fa8-bedc28b207f1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0817 21:12:25.990134  211031 system_pods.go:126] duration metric: took 206.760366ms to wait for k8s-apps to be running ...
	I0817 21:12:25.990144  211031 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:12:25.990201  211031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:12:26.006925  211031 system_svc.go:56] duration metric: took 16.76857ms WaitForService to wait for kubelet.
	I0817 21:12:26.006959  211031 kubeadm.go:581] duration metric: took 42.911210516s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
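The kubelet liveness check logged above is the SSH-run command "sudo systemctl is-active --quiet service kubelet", which exits 0 only while the unit is active. A small local sketch of the same idea follows; the SSH transport is omitted as an assumption for brevity, so this is not minikube's actual runner.

// kubelet_active_sketch.go: check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active exits non-zero when the unit is inactive or failed;
	// --quiet suppresses the state string on stdout.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}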
	I0817 21:12:26.006988  211031 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:12:26.032163  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:26.096282  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:26.099436  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:26.183447  211031 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:12:26.183481  211031 node_conditions.go:123] node cpu capacity is 2
	I0817 21:12:26.183494  211031 node_conditions.go:105] duration metric: took 176.501372ms to run NodePressure ...
	I0817 21:12:26.183506  211031 start.go:228] waiting for startup goroutines ...
	I0817 21:12:26.267559  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:26.528402  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:26.597335  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:26.599420  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:26.766627  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:27.031123  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:27.096853  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:27.100614  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:27.266525  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:27.533185  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:27.596773  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:27.602411  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:27.766889  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:28.028512  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:28.096037  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:28.099990  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:28.268025  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:28.529235  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:28.597066  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:28.607247  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:28.767661  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:29.029947  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:29.100940  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:29.107457  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:29.277224  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:29.530147  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:29.600888  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:29.617222  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:29.781003  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:30.048474  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:30.104243  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:30.107205  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:30.273397  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:30.541013  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:30.645042  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:30.659072  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:30.773974  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:31.044950  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:31.095977  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:31.098918  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:31.269895  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:31.529763  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:31.595750  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:31.599009  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:31.782625  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:32.149816  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:32.152185  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:32.152388  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:32.266771  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:32.530820  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:32.597005  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:32.600659  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:32.777201  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:33.029158  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:33.098868  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:33.103103  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:33.267754  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:33.528126  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:33.596380  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:33.600814  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:34.092786  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:34.092952  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:34.103419  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:34.103478  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:34.267934  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:34.529243  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:34.596254  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:34.600000  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:34.768117  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:35.029561  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:35.096695  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:35.100520  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:35.268447  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:35.529074  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:35.600947  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:35.603457  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:35.768139  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:36.041357  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:36.107416  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:36.114352  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:36.266562  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:36.543122  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:36.597034  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:36.600743  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:36.767103  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:37.030048  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:37.097003  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:37.100706  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:37.267609  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:37.528814  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:37.597023  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:37.600107  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:37.766940  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:38.029455  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:38.111144  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:38.113099  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:38.269749  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:38.528532  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:38.597048  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:38.604324  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:38.766984  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:39.030985  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:39.096138  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:39.101822  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:39.266649  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:39.529080  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:39.603129  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:39.603309  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:39.766539  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:40.044150  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:40.096601  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:40.100707  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:40.267785  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:40.529050  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:40.600073  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:40.602519  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:40.766612  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:41.032871  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:41.151996  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:41.152402  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:41.269875  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:41.529119  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:41.597072  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:41.600320  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:41.767644  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:42.055017  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:42.117035  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:42.121001  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:42.269513  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:42.528234  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:42.599925  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:42.602725  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:42.767478  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:43.028976  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:43.096236  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:43.101939  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:43.266995  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:43.529578  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:43.599976  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:43.600470  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:43.769916  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:44.029530  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:44.104562  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:44.104703  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:44.267487  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:44.532882  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:44.598473  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:44.601016  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:44.767575  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:45.033759  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:45.095907  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:45.099107  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:45.268527  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:45.528307  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:45.596562  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:45.600021  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:45.767164  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:46.031037  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:46.096835  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:46.099699  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:46.282611  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:46.895265  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:46.895664  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:46.895841  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:46.895913  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:47.039096  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:47.107996  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:47.108736  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:47.268587  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:47.528123  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:47.596504  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:47.599713  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:47.766921  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:48.028942  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:48.096253  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:48.099472  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:48.266335  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:48.529327  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:48.596582  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:48.600070  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:48.767797  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:49.031538  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:49.109739  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:49.110571  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:49.269712  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:49.530268  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:49.601507  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:49.604242  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:49.766625  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:50.028917  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:50.098454  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:50.100478  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:50.267580  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:50.529011  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:50.596683  211031 kapi.go:107] duration metric: took 58.058222981s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 21:12:50.618849  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:50.767353  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:51.037321  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:51.105143  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:51.269452  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:51.537756  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:51.602328  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:51.776228  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:52.042024  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:52.101366  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:52.277140  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:52.547372  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:52.601417  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:52.768987  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:53.030641  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:53.099848  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:53.267186  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:53.529855  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:53.604834  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:53.767345  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:54.030241  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:54.160947  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:54.267223  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:54.530522  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:54.602403  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:54.766879  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:55.029570  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:55.101731  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:55.269307  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:55.529084  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:55.600235  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:55.767482  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:56.029225  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:56.100979  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:56.461810  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:56.529275  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:56.600520  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:56.767586  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:57.032426  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:57.102597  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:57.267426  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:57.530126  211031 kapi.go:107] duration metric: took 1m4.179085173s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 21:12:57.599583  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:57.769043  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:58.100206  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:58.267109  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:58.600159  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:58.767074  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:59.099877  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:59.266728  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:59.599959  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:59.767705  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:00.101315  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:00.269192  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:00.599971  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:00.768060  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:01.101617  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:01.271842  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:01.601183  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:01.767996  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:02.102091  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:02.267205  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:02.600137  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:02.766702  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:03.100460  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:03.267501  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:03.600499  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:03.768082  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:04.408875  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:04.412165  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:04.600501  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:04.767169  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:05.101248  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:05.267180  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:05.600510  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:05.766852  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:06.101261  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:06.268791  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:06.600678  211031 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:06.774252  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:07.100403  211031 kapi.go:107] duration metric: took 1m14.557768787s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 21:13:07.267482  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:07.767236  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:08.271438  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:08.767388  211031 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:09.267614  211031 kapi.go:107] duration metric: took 1m13.043082763s to wait for kubernetes.io/minikube-addons=gcp-auth ...
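	The kapi.go lines above show minikube polling each addon's pods by label selector until they leave Pending. A rough manual equivalent with kubectl, using the selectors, namespaces, and cluster name taken from this log (the 90s timeout is an assumption, not the value minikube uses internally):
	
	    kubectl --context addons-696435 -n gcp-auth wait --for=condition=ready pod \
	      --selector=kubernetes.io/minikube-addons=gcp-auth --timeout=90s
	    kubectl --context addons-696435 -n ingress-nginx wait --for=condition=ready pod \
	      --selector=app.kubernetes.io/name=ingress-nginx --timeout=90s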
	I0817 21:13:09.269759  211031 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-696435 cluster.
	I0817 21:13:09.271302  211031 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 21:13:09.272832  211031 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
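	As the gcp-auth messages above note, a pod can opt out of credential mounting with the `gcp-auth-skip-secret` label, and pods that already existed pick up credentials only after a refresh. A minimal sketch (the pod name and image below are placeholders, not part of this run):
	
	    # create a pod that will not get GCP credentials mounted
	    kubectl --context addons-696435 run skip-demo --image=busybox --restart=Never \
	      --labels=gcp-auth-skip-secret=true -- sleep 3600
	    # re-mount credentials into pods created before the addon was enabled
	    out/minikube-linux-amd64 -p addons-696435 addons enable gcp-auth --refresh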
	I0817 21:13:09.274438  211031 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, storage-provisioner, inspektor-gadget, helm-tiller, metrics-server, ingress-dns, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0817 21:13:09.277138  211031 addons.go:502] enable addons completed in 1m26.550081452s: enabled=[default-storageclass cloud-spanner storage-provisioner inspektor-gadget helm-tiller metrics-server ingress-dns volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0817 21:13:09.277185  211031 start.go:233] waiting for cluster config update ...
	I0817 21:13:09.277203  211031 start.go:242] writing updated cluster config ...
	I0817 21:13:09.277518  211031 ssh_runner.go:195] Run: rm -f paused
	I0817 21:13:09.332558  211031 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 21:13:09.334549  211031 out.go:177] * Done! kubectl is now configured to use "addons-696435" cluster and "default" namespace by default
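	At this point the kubeconfig context points at the new cluster; a quick sanity check (assuming the default kubeconfig location):
	
	    kubectl config current-context          # expected: addons-696435
	    kubectl --context addons-696435 get pods -A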
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 21:10:57 UTC, ends at Thu 2023-08-17 21:15:52 UTC. --
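	The ListContainers request/response pairs that follow are CRI debug traffic, i.e. the same RPC that crictl issues when listing containers. A rough manual equivalent via the test's own ssh helper (assuming crictl is available on the node, as it normally is in the minikube guest):
	
	    out/minikube-linux-amd64 -p addons-696435 ssh "sudo crictl ps -a"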
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.861134698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251ab9aeaef4ebe821be915bf8a1118ed6906690f8ff93f0b4f58b0cbff93182,PodSandboxId:459fbc77f84a3618f9b265664ee09ecf0d471767fb8d0ba2e55a35d621835818,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692306943952471981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-xgwg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c781b79-05bb-4e59-b7cd-c3951e364132,},Annotations:map[string]string{io.kubernetes.container.hash: 4f456732,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e521182b132b49f1a8065bcb3b5bf3bca84d742da215328c18716bd82cb66e4,PodSandboxId:bdd8231f59831f49f232c4c422170fbfe39ff6a8f3339a046429528e5b55a386,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1692306811507735970,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5c78f74d8d-95cck,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f114a63f-e310-41e3-a494-52ccb195bac0,},An
notations:map[string]string{io.kubernetes.container.hash: 6f801a8e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ffd7ca32e4439a0b058708e430c974d133e7dda13616a059acbdfa94b32950,PodSandboxId:989fb358f7d69f6d7daa57e0f2b4c266b7877a5e391a8b9d4e77a1b870a82461,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692306803349776285,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 943e028b-6339-40cb-ba80-2824903afc67,},Annotations:map[string]string{io.kubernetes.container.hash: 5173ad62,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5,PodSandboxId:e2d80ed8ce1abd378faf13b9253641943741b951f80c001b67a90871ebd7a597,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1692306788066127078,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-2d5fm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2389756a-cabf-48db-b7b2-8740a315302e,},Annotations:map[string]string{io.kubernetes.container.hash: f1ec311,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f420d13ab79696675f4f19e26125f57cd115d03a19b8b25b80bf8caf70d7abfc,PodSandboxId:619047c758409dced0f0684792f98a4c4751140766bd786b8322f59706a0267e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757971902640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zxvk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0314053e-4f04-4060-8cc8-31680955d769,},Annotations:map[string]string{io.kubernetes.container.hash: 14400890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecff0a5e295001b969b6d920a4488f308fdc98d80b3b8506b97d4412649eaff,PodSandboxId:5206844f07bec32bc2fee5ff07df53c796590f286c86cc96cc8528110bbf0ae4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757834286245,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nwrf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6988bc4-26b6-4a75-bfef-0d9307e58c2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4f1114,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8,PodSandboxId:7faf47e0b01b86fe74fc7b922cc08f80fc6c8d1e22511a6e6149e1e45f1dc7cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692306719304453071,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45325f4-e19f-4ecf-9b8a-76e59a0327f6,},Annotations:map[string]string{io.kubernetes.container.hash: e25015a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7,PodSandboxId:7db123920e806c72f34c64eca21a22c89176973da3557b039e7137fad9473fee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da
9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692306710165181530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgd2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dc502c-f43f-4d13-8a2f-631df8694866,},Annotations:map[string]string{io.kubernetes.container.hash: bfceae38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc,PodSandboxId:12fd1391241439442e011fb71a417c3574eef61a43526732426e9bb9f51f436d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43
ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692306707064835673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-x6x28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542435b0-c08d-41b3-9af3-e974c321fe08,},Annotations:map[string]string{io.kubernetes.container.hash: 5509606,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd,PodSandboxId:aebf58c41eb08a08a90a4a365ab3c765939d1d07ec575a7e3ed723a0010227ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692306682456674534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18aee514717df299d16fe445701533a1,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2,PodSandboxId:9dcdcf4b6b6e0b29d079b1ae7e0a56c46ef0551fb5c7be267b1ab546b19cad4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd6
52c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692306682195689678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67223e3e9aa2d6f74d562a966a2a83d,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce983,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a,PodSandboxId:dee9637e4e9327ab38f1f39e32600a3f84a1229e4305463c47f712f0a4b746de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d146
0ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692306682034980439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72fec8d4a0cefbd063f3d9fca29f35d7,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb,PodSandboxId:af20c1f9ba3e01e5ec4f47d7dba1bf5a74e034b051e4c1ca0e513d47885b54d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed5
94932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692306681852675684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ac61ce9f5090ffe555bd2c39bc4f46,},Annotations:map[string]string{io.kubernetes.container.hash: aff47835,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9d8e88b8-211a-42db-bd07-2a1e62207002 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.896055667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad3069df-46f4-4591-9e34-0f0fc950dd74 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.896158799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad3069df-46f4-4591-9e34-0f0fc950dd74 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.896503733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251ab9aeaef4ebe821be915bf8a1118ed6906690f8ff93f0b4f58b0cbff93182,PodSandboxId:459fbc77f84a3618f9b265664ee09ecf0d471767fb8d0ba2e55a35d621835818,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692306943952471981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-xgwg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c781b79-05bb-4e59-b7cd-c3951e364132,},Annotations:map[string]string{io.kubernetes.container.hash: 4f456732,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e521182b132b49f1a8065bcb3b5bf3bca84d742da215328c18716bd82cb66e4,PodSandboxId:bdd8231f59831f49f232c4c422170fbfe39ff6a8f3339a046429528e5b55a386,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1692306811507735970,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5c78f74d8d-95cck,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f114a63f-e310-41e3-a494-52ccb195bac0,},An
notations:map[string]string{io.kubernetes.container.hash: 6f801a8e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ffd7ca32e4439a0b058708e430c974d133e7dda13616a059acbdfa94b32950,PodSandboxId:989fb358f7d69f6d7daa57e0f2b4c266b7877a5e391a8b9d4e77a1b870a82461,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692306803349776285,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 943e028b-6339-40cb-ba80-2824903afc67,},Annotations:map[string]string{io.kubernetes.container.hash: 5173ad62,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5,PodSandboxId:e2d80ed8ce1abd378faf13b9253641943741b951f80c001b67a90871ebd7a597,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1692306788066127078,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-2d5fm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2389756a-cabf-48db-b7b2-8740a315302e,},Annotations:map[string]string{io.kubernetes.container.hash: f1ec311,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f420d13ab79696675f4f19e26125f57cd115d03a19b8b25b80bf8caf70d7abfc,PodSandboxId:619047c758409dced0f0684792f98a4c4751140766bd786b8322f59706a0267e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757971902640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zxvk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0314053e-4f04-4060-8cc8-31680955d769,},Annotations:map[string]string{io.kubernetes.container.hash: 14400890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecff0a5e295001b969b6d920a4488f308fdc98d80b3b8506b97d4412649eaff,PodSandboxId:5206844f07bec32bc2fee5ff07df53c796590f286c86cc96cc8528110bbf0ae4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757834286245,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nwrf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6988bc4-26b6-4a75-bfef-0d9307e58c2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4f1114,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8,PodSandboxId:7faf47e0b01b86fe74fc7b922cc08f80fc6c8d1e22511a6e6149e1e45f1dc7cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692306719304453071,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45325f4-e19f-4ecf-9b8a-76e59a0327f6,},Annotations:map[string]string{io.kubernetes.container.hash: e25015a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7,PodSandboxId:7db123920e806c72f34c64eca21a22c89176973da3557b039e7137fad9473fee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da
9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692306710165181530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgd2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dc502c-f43f-4d13-8a2f-631df8694866,},Annotations:map[string]string{io.kubernetes.container.hash: bfceae38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc,PodSandboxId:12fd1391241439442e011fb71a417c3574eef61a43526732426e9bb9f51f436d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43
ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692306707064835673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-x6x28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542435b0-c08d-41b3-9af3-e974c321fe08,},Annotations:map[string]string{io.kubernetes.container.hash: 5509606,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd,PodSandboxId:aebf58c41eb08a08a90a4a365ab3c765939d1d07ec575a7e3ed723a0010227ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692306682456674534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18aee514717df299d16fe445701533a1,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2,PodSandboxId:9dcdcf4b6b6e0b29d079b1ae7e0a56c46ef0551fb5c7be267b1ab546b19cad4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd6
52c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692306682195689678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67223e3e9aa2d6f74d562a966a2a83d,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce983,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a,PodSandboxId:dee9637e4e9327ab38f1f39e32600a3f84a1229e4305463c47f712f0a4b746de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d146
0ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692306682034980439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72fec8d4a0cefbd063f3d9fca29f35d7,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb,PodSandboxId:af20c1f9ba3e01e5ec4f47d7dba1bf5a74e034b051e4c1ca0e513d47885b54d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed5
94932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692306681852675684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ac61ce9f5090ffe555bd2c39bc4f46,},Annotations:map[string]string{io.kubernetes.container.hash: aff47835,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad3069df-46f4-4591-9e34-0f0fc950dd74 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.936102090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6c29c2f6-9edf-414e-baf1-0cd3de909aa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.936259189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6c29c2f6-9edf-414e-baf1-0cd3de909aa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.936630719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251ab9aeaef4ebe821be915bf8a1118ed6906690f8ff93f0b4f58b0cbff93182,PodSandboxId:459fbc77f84a3618f9b265664ee09ecf0d471767fb8d0ba2e55a35d621835818,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692306943952471981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-xgwg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c781b79-05bb-4e59-b7cd-c3951e364132,},Annotations:map[string]string{io.kubernetes.container.hash: 4f456732,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e521182b132b49f1a8065bcb3b5bf3bca84d742da215328c18716bd82cb66e4,PodSandboxId:bdd8231f59831f49f232c4c422170fbfe39ff6a8f3339a046429528e5b55a386,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1692306811507735970,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5c78f74d8d-95cck,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f114a63f-e310-41e3-a494-52ccb195bac0,},An
notations:map[string]string{io.kubernetes.container.hash: 6f801a8e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ffd7ca32e4439a0b058708e430c974d133e7dda13616a059acbdfa94b32950,PodSandboxId:989fb358f7d69f6d7daa57e0f2b4c266b7877a5e391a8b9d4e77a1b870a82461,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692306803349776285,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 943e028b-6339-40cb-ba80-2824903afc67,},Annotations:map[string]string{io.kubernetes.container.hash: 5173ad62,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5,PodSandboxId:e2d80ed8ce1abd378faf13b9253641943741b951f80c001b67a90871ebd7a597,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1692306788066127078,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-2d5fm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2389756a-cabf-48db-b7b2-8740a315302e,},Annotations:map[string]string{io.kubernetes.container.hash: f1ec311,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f420d13ab79696675f4f19e26125f57cd115d03a19b8b25b80bf8caf70d7abfc,PodSandboxId:619047c758409dced0f0684792f98a4c4751140766bd786b8322f59706a0267e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757971902640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zxvk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0314053e-4f04-4060-8cc8-31680955d769,},Annotations:map[string]string{io.kubernetes.container.hash: 14400890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecff0a5e295001b969b6d920a4488f308fdc98d80b3b8506b97d4412649eaff,PodSandboxId:5206844f07bec32bc2fee5ff07df53c796590f286c86cc96cc8528110bbf0ae4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757834286245,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nwrf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6988bc4-26b6-4a75-bfef-0d9307e58c2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4f1114,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8,PodSandboxId:7faf47e0b01b86fe74fc7b922cc08f80fc6c8d1e22511a6e6149e1e45f1dc7cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692306719304453071,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45325f4-e19f-4ecf-9b8a-76e59a0327f6,},Annotations:map[string]string{io.kubernetes.container.hash: e25015a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7,PodSandboxId:7db123920e806c72f34c64eca21a22c89176973da3557b039e7137fad9473fee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da
9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692306710165181530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgd2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dc502c-f43f-4d13-8a2f-631df8694866,},Annotations:map[string]string{io.kubernetes.container.hash: bfceae38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc,PodSandboxId:12fd1391241439442e011fb71a417c3574eef61a43526732426e9bb9f51f436d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43
ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692306707064835673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-x6x28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542435b0-c08d-41b3-9af3-e974c321fe08,},Annotations:map[string]string{io.kubernetes.container.hash: 5509606,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd,PodSandboxId:aebf58c41eb08a08a90a4a365ab3c765939d1d07ec575a7e3ed723a0010227ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692306682456674534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18aee514717df299d16fe445701533a1,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2,PodSandboxId:9dcdcf4b6b6e0b29d079b1ae7e0a56c46ef0551fb5c7be267b1ab546b19cad4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd6
52c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692306682195689678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67223e3e9aa2d6f74d562a966a2a83d,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce983,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a,PodSandboxId:dee9637e4e9327ab38f1f39e32600a3f84a1229e4305463c47f712f0a4b746de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d146
0ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692306682034980439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72fec8d4a0cefbd063f3d9fca29f35d7,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb,PodSandboxId:af20c1f9ba3e01e5ec4f47d7dba1bf5a74e034b051e4c1ca0e513d47885b54d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed5
94932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692306681852675684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ac61ce9f5090ffe555bd2c39bc4f46,},Annotations:map[string]string{io.kubernetes.container.hash: aff47835,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6c29c2f6-9edf-414e-baf1-0cd3de909aa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.972907258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7762059c-c672-4537-b4a8-9eb12bc4c690 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.973001420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7762059c-c672-4537-b4a8-9eb12bc4c690 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:51 addons-696435 crio[712]: time="2023-08-17 21:15:51.973304256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251ab9aeaef4ebe821be915bf8a1118ed6906690f8ff93f0b4f58b0cbff93182,PodSandboxId:459fbc77f84a3618f9b265664ee09ecf0d471767fb8d0ba2e55a35d621835818,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692306943952471981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-xgwg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c781b79-05bb-4e59-b7cd-c3951e364132,},Annotations:map[string]string{io.kubernetes.container.hash: 4f456732,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e521182b132b49f1a8065bcb3b5bf3bca84d742da215328c18716bd82cb66e4,PodSandboxId:bdd8231f59831f49f232c4c422170fbfe39ff6a8f3339a046429528e5b55a386,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1692306811507735970,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5c78f74d8d-95cck,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f114a63f-e310-41e3-a494-52ccb195bac0,},An
notations:map[string]string{io.kubernetes.container.hash: 6f801a8e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ffd7ca32e4439a0b058708e430c974d133e7dda13616a059acbdfa94b32950,PodSandboxId:989fb358f7d69f6d7daa57e0f2b4c266b7877a5e391a8b9d4e77a1b870a82461,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692306803349776285,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 943e028b-6339-40cb-ba80-2824903afc67,},Annotations:map[string]string{io.kubernetes.container.hash: 5173ad62,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5,PodSandboxId:e2d80ed8ce1abd378faf13b9253641943741b951f80c001b67a90871ebd7a597,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1692306788066127078,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-2d5fm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2389756a-cabf-48db-b7b2-8740a315302e,},Annotations:map[string]string{io.kubernetes.container.hash: f1ec311,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f420d13ab79696675f4f19e26125f57cd115d03a19b8b25b80bf8caf70d7abfc,PodSandboxId:619047c758409dced0f0684792f98a4c4751140766bd786b8322f59706a0267e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757971902640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zxvk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0314053e-4f04-4060-8cc8-31680955d769,},Annotations:map[string]string{io.kubernetes.container.hash: 14400890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecff0a5e295001b969b6d920a4488f308fdc98d80b3b8506b97d4412649eaff,PodSandboxId:5206844f07bec32bc2fee5ff07df53c796590f286c86cc96cc8528110bbf0ae4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757834286245,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nwrf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6988bc4-26b6-4a75-bfef-0d9307e58c2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4f1114,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8,PodSandboxId:7faf47e0b01b86fe74fc7b922cc08f80fc6c8d1e22511a6e6149e1e45f1dc7cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692306719304453071,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45325f4-e19f-4ecf-9b8a-76e59a0327f6,},Annotations:map[string]string{io.kubernetes.container.hash: e25015a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7,PodSandboxId:7db123920e806c72f34c64eca21a22c89176973da3557b039e7137fad9473fee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da
9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692306710165181530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgd2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dc502c-f43f-4d13-8a2f-631df8694866,},Annotations:map[string]string{io.kubernetes.container.hash: bfceae38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc,PodSandboxId:12fd1391241439442e011fb71a417c3574eef61a43526732426e9bb9f51f436d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43
ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692306707064835673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-x6x28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542435b0-c08d-41b3-9af3-e974c321fe08,},Annotations:map[string]string{io.kubernetes.container.hash: 5509606,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd,PodSandboxId:aebf58c41eb08a08a90a4a365ab3c765939d1d07ec575a7e3ed723a0010227ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692306682456674534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18aee514717df299d16fe445701533a1,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2,PodSandboxId:9dcdcf4b6b6e0b29d079b1ae7e0a56c46ef0551fb5c7be267b1ab546b19cad4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd6
52c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692306682195689678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67223e3e9aa2d6f74d562a966a2a83d,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce983,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a,PodSandboxId:dee9637e4e9327ab38f1f39e32600a3f84a1229e4305463c47f712f0a4b746de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d146
0ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692306682034980439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72fec8d4a0cefbd063f3d9fca29f35d7,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb,PodSandboxId:af20c1f9ba3e01e5ec4f47d7dba1bf5a74e034b051e4c1ca0e513d47885b54d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed5
94932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692306681852675684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ac61ce9f5090ffe555bd2c39bc4f46,},Annotations:map[string]string{io.kubernetes.container.hash: aff47835,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7762059c-c672-4537-b4a8-9eb12bc4c690 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:52 addons-696435 crio[712]: time="2023-08-17 21:15:52.125574509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0dff0e59-e325-40fd-8704-ce1b4b293982 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:52 addons-696435 crio[712]: time="2023-08-17 21:15:52.125670307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0dff0e59-e325-40fd-8704-ce1b4b293982 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:52 addons-696435 crio[712]: time="2023-08-17 21:15:52.125953645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251ab9aeaef4ebe821be915bf8a1118ed6906690f8ff93f0b4f58b0cbff93182,PodSandboxId:459fbc77f84a3618f9b265664ee09ecf0d471767fb8d0ba2e55a35d621835818,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692306943952471981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-xgwg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c781b79-05bb-4e59-b7cd-c3951e364132,},Annotations:map[string]string{io.kubernetes.container.hash: 4f456732,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e521182b132b49f1a8065bcb3b5bf3bca84d742da215328c18716bd82cb66e4,PodSandboxId:bdd8231f59831f49f232c4c422170fbfe39ff6a8f3339a046429528e5b55a386,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1692306811507735970,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5c78f74d8d-95cck,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f114a63f-e310-41e3-a494-52ccb195bac0,},An
notations:map[string]string{io.kubernetes.container.hash: 6f801a8e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ffd7ca32e4439a0b058708e430c974d133e7dda13616a059acbdfa94b32950,PodSandboxId:989fb358f7d69f6d7daa57e0f2b4c266b7877a5e391a8b9d4e77a1b870a82461,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692306803349776285,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 943e028b-6339-40cb-ba80-2824903afc67,},Annotations:map[string]string{io.kubernetes.container.hash: 5173ad62,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5,PodSandboxId:e2d80ed8ce1abd378faf13b9253641943741b951f80c001b67a90871ebd7a597,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1692306788066127078,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-2d5fm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2389756a-cabf-48db-b7b2-8740a315302e,},Annotations:map[string]string{io.kubernetes.container.hash: f1ec311,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f420d13ab79696675f4f19e26125f57cd115d03a19b8b25b80bf8caf70d7abfc,PodSandboxId:619047c758409dced0f0684792f98a4c4751140766bd786b8322f59706a0267e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757971902640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zxvk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0314053e-4f04-4060-8cc8-31680955d769,},Annotations:map[string]string{io.kubernetes.container.hash: 14400890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecff0a5e295001b969b6d920a4488f308fdc98d80b3b8506b97d4412649eaff,PodSandboxId:5206844f07bec32bc2fee5ff07df53c796590f286c86cc96cc8528110bbf0ae4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757834286245,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nwrf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6988bc4-26b6-4a75-bfef-0d9307e58c2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4f1114,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8,PodSandboxId:7faf47e0b01b86fe74fc7b922cc08f80fc6c8d1e22511a6e6149e1e45f1dc7cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692306719304453071,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45325f4-e19f-4ecf-9b8a-76e59a0327f6,},Annotations:map[string]string{io.kubernetes.container.hash: e25015a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7,PodSandboxId:7db123920e806c72f34c64eca21a22c89176973da3557b039e7137fad9473fee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da
9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692306710165181530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgd2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dc502c-f43f-4d13-8a2f-631df8694866,},Annotations:map[string]string{io.kubernetes.container.hash: bfceae38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc,PodSandboxId:12fd1391241439442e011fb71a417c3574eef61a43526732426e9bb9f51f436d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43
ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692306707064835673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-x6x28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542435b0-c08d-41b3-9af3-e974c321fe08,},Annotations:map[string]string{io.kubernetes.container.hash: 5509606,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd,PodSandboxId:aebf58c41eb08a08a90a4a365ab3c765939d1d07ec575a7e3ed723a0010227ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692306682456674534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18aee514717df299d16fe445701533a1,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2,PodSandboxId:9dcdcf4b6b6e0b29d079b1ae7e0a56c46ef0551fb5c7be267b1ab546b19cad4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd6
52c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692306682195689678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67223e3e9aa2d6f74d562a966a2a83d,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce983,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a,PodSandboxId:dee9637e4e9327ab38f1f39e32600a3f84a1229e4305463c47f712f0a4b746de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d146
0ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692306682034980439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72fec8d4a0cefbd063f3d9fca29f35d7,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb,PodSandboxId:af20c1f9ba3e01e5ec4f47d7dba1bf5a74e034b051e4c1ca0e513d47885b54d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed5
94932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692306681852675684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ac61ce9f5090ffe555bd2c39bc4f46,},Annotations:map[string]string{io.kubernetes.container.hash: aff47835,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0dff0e59-e325-40fd-8704-ce1b4b293982 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:52 addons-696435 crio[712]: time="2023-08-17 21:15:52.161317620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cec0e4bc-5afd-4bed-aa54-eed41e33bcd7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:52 addons-696435 crio[712]: time="2023-08-17 21:15:52.161412242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cec0e4bc-5afd-4bed-aa54-eed41e33bcd7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:15:52 addons-696435 crio[712]: time="2023-08-17 21:15:52.161817524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251ab9aeaef4ebe821be915bf8a1118ed6906690f8ff93f0b4f58b0cbff93182,PodSandboxId:459fbc77f84a3618f9b265664ee09ecf0d471767fb8d0ba2e55a35d621835818,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692306943952471981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-xgwg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c781b79-05bb-4e59-b7cd-c3951e364132,},Annotations:map[string]string{io.kubernetes.container.hash: 4f456732,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e521182b132b49f1a8065bcb3b5bf3bca84d742da215328c18716bd82cb66e4,PodSandboxId:bdd8231f59831f49f232c4c422170fbfe39ff6a8f3339a046429528e5b55a386,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1692306811507735970,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5c78f74d8d-95cck,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f114a63f-e310-41e3-a494-52ccb195bac0,},An
notations:map[string]string{io.kubernetes.container.hash: 6f801a8e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ffd7ca32e4439a0b058708e430c974d133e7dda13616a059acbdfa94b32950,PodSandboxId:989fb358f7d69f6d7daa57e0f2b4c266b7877a5e391a8b9d4e77a1b870a82461,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692306803349776285,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 943e028b-6339-40cb-ba80-2824903afc67,},Annotations:map[string]string{io.kubernetes.container.hash: 5173ad62,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5,PodSandboxId:e2d80ed8ce1abd378faf13b9253641943741b951f80c001b67a90871ebd7a597,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1692306788066127078,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-2d5fm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2389756a-cabf-48db-b7b2-8740a315302e,},Annotations:map[string]string{io.kubernetes.container.hash: f1ec311,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f420d13ab79696675f4f19e26125f57cd115d03a19b8b25b80bf8caf70d7abfc,PodSandboxId:619047c758409dced0f0684792f98a4c4751140766bd786b8322f59706a0267e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757971902640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zxvk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0314053e-4f04-4060-8cc8-31680955d769,},Annotations:map[string]string{io.kubernetes.container.hash: 14400890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecff0a5e295001b969b6d920a4488f308fdc98d80b3b8506b97d4412649eaff,PodSandboxId:5206844f07bec32bc2fee5ff07df53c796590f286c86cc96cc8528110bbf0ae4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1692306757834286245,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nwrf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6988bc4-26b6-4a75-bfef-0d9307e58c2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4f1114,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8,PodSandboxId:7faf47e0b01b86fe74fc7b922cc08f80fc6c8d1e22511a6e6149e1e45f1dc7cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692306719304453071,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45325f4-e19f-4ecf-9b8a-76e59a0327f6,},Annotations:map[string]string{io.kubernetes.container.hash: e25015a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7,PodSandboxId:7db123920e806c72f34c64eca21a22c89176973da3557b039e7137fad9473fee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da
9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692306710165181530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgd2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dc502c-f43f-4d13-8a2f-631df8694866,},Annotations:map[string]string{io.kubernetes.container.hash: bfceae38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc,PodSandboxId:12fd1391241439442e011fb71a417c3574eef61a43526732426e9bb9f51f436d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43
ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692306707064835673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-x6x28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542435b0-c08d-41b3-9af3-e974c321fe08,},Annotations:map[string]string{io.kubernetes.container.hash: 5509606,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd,PodSandboxId:aebf58c41eb08a08a90a4a365ab3c765939d1d07ec575a7e3ed723a0010227ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692306682456674534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18aee514717df299d16fe445701533a1,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2,PodSandboxId:9dcdcf4b6b6e0b29d079b1ae7e0a56c46ef0551fb5c7be267b1ab546b19cad4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd6
52c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692306682195689678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67223e3e9aa2d6f74d562a966a2a83d,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce983,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a,PodSandboxId:dee9637e4e9327ab38f1f39e32600a3f84a1229e4305463c47f712f0a4b746de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d146
0ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692306682034980439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72fec8d4a0cefbd063f3d9fca29f35d7,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb,PodSandboxId:af20c1f9ba3e01e5ec4f47d7dba1bf5a74e034b051e4c1ca0e513d47885b54d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed5
94932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692306681852675684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-696435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ac61ce9f5090ffe555bd2c39bc4f46,},Annotations:map[string]string{io.kubernetes.container.hash: aff47835,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cec0e4bc-5afd-4bed-aa54-eed41e33bcd7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	251ab9aeaef4e       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   459fbc77f84a3
	3e521182b132b       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   bdd8231f59831
	29ffd7ca32e44       docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a                              2 minutes ago       Running             nginx                     0                   989fb358f7d69
	dc8290b54b295       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   e2d80ed8ce1ab
	f420d13ab7969       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   619047c758409
	9ecff0a5e2950       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   5206844f07bec
	cd6a5bc627839       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   7faf47e0b01b8
	efa40a05b8278       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                                             4 minutes ago       Running             kube-proxy                0                   7db123920e806
	55d57a3230700       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   12fd139124143
	5012e5444de1e       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                                             4 minutes ago       Running             kube-scheduler            0                   aebf58c41eb08
	4e4b28aa85aec       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   9dcdcf4b6b6e0
	2c333a20e91e3       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                                             4 minutes ago       Running             kube-controller-manager   0                   dee9637e4e932
	d9c01caa565c6       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                                             4 minutes ago       Running             kube-apiserver            0                   af20c1f9ba3e0
	
	* 
	* ==> coredns [55d57a32307000fdd7afa10b39a926ec2d356dff782f85fd0e569cc31a21ffdc] <==
	* [INFO] 10.244.0.7:45504 - 15201 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.002164721s
	[INFO] 10.244.0.7:53894 - 49757 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123801s
	[INFO] 10.244.0.7:53894 - 61520 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000623204s
	[INFO] 10.244.0.7:42295 - 47097 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014386s
	[INFO] 10.244.0.7:42295 - 26871 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117761s
	[INFO] 10.244.0.7:46638 - 40842 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000140652s
	[INFO] 10.244.0.7:46638 - 44168 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071354s
	[INFO] 10.244.0.7:59109 - 4008 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122169s
	[INFO] 10.244.0.7:59109 - 19877 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040206s
	[INFO] 10.244.0.7:51873 - 29126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007193s
	[INFO] 10.244.0.7:51873 - 20419 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112833s
	[INFO] 10.244.0.7:48791 - 53798 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048129s
	[INFO] 10.244.0.7:48791 - 32296 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031148s
	[INFO] 10.244.0.7:36985 - 33597 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00007258s
	[INFO] 10.244.0.7:36985 - 34111 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00002398s
	[INFO] 10.244.0.19:56226 - 29526 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000436186s
	[INFO] 10.244.0.19:59705 - 50553 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000331092s
	[INFO] 10.244.0.19:46478 - 23807 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134016s
	[INFO] 10.244.0.19:60143 - 64046 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000333522s
	[INFO] 10.244.0.19:59998 - 21528 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006348s
	[INFO] 10.244.0.19:39471 - 52286 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058242s
	[INFO] 10.244.0.19:50333 - 28324 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000804625s
	[INFO] 10.244.0.19:44551 - 57890 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000382287s
	[INFO] 10.244.0.22:53957 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000986049s
	[INFO] 10.244.0.22:38758 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000200571s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-696435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-696435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=addons-696435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_11_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-696435
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:11:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-696435
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:15:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:14:04 +0000   Thu, 17 Aug 2023 21:11:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:14:04 +0000   Thu, 17 Aug 2023 21:11:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:14:04 +0000   Thu, 17 Aug 2023 21:11:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:14:04 +0000   Thu, 17 Aug 2023 21:11:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    addons-696435
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac996715a61247759d93087f87535bd4
	  System UUID:                ac996715-a612-4775-9d93-087f87535bd4
	  Boot ID:                    d9b6caf5-57a5-4ac7-8707-a0112b8664d9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-xgwg9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-58478865f7-2d5fm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  headlamp                    headlamp-5c78f74d8d-95cck                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 coredns-5d78c9869d-x6x28                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m10s
	  kube-system                 etcd-addons-696435                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-addons-696435             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-addons-696435    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-xgd2l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-addons-696435             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  Starting                 4m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node addons-696435 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node addons-696435 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x7 over 4m32s)  kubelet          Node addons-696435 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s                  kubelet          Node addons-696435 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s                  kubelet          Node addons-696435 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s                  kubelet          Node addons-696435 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m21s                  kubelet          Node addons-696435 status is now: NodeReady
	  Normal  RegisteredNode           4m11s                  node-controller  Node addons-696435 event: Registered Node addons-696435 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.101254] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.402391] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.580631] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.157762] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug17 21:11] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.311422] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.117218] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.143258] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.098833] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.210873] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[  +8.650499] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +9.799371] systemd-fstab-generator[1243]: Ignoring "noauto" for root device
	[ +27.056537] kauditd_printk_skb: 54 callbacks suppressed
	[Aug17 21:12] kauditd_printk_skb: 28 callbacks suppressed
	[ +33.497736] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.242765] kauditd_printk_skb: 4 callbacks suppressed
	[Aug17 21:13] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.899875] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.279015] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.238078] kauditd_printk_skb: 14 callbacks suppressed
	[Aug17 21:14] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [4e4b28aa85aec0c9d8e37212cdf3cdf9f4ad39e0645e3c6a9835f07fc1cd13b2] <==
	* {"level":"info","ts":"2023-08-17T21:13:04.405Z","caller":"traceutil/trace.go:171","msg":"trace[561303407] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1068; }","duration":"310.136968ms","start":"2023-08-17T21:13:04.095Z","end":"2023-08-17T21:13:04.405Z","steps":["trace[561303407] 'agreement among raft nodes before linearized reading'  (duration: 308.896888ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:04.405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T21:13:04.095Z","time spent":"310.284026ms","remote":"127.0.0.1:55448","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13864,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2023-08-17T21:13:17.543Z","caller":"traceutil/trace.go:171","msg":"trace[736286203] linearizableReadLoop","detail":"{readStateIndex:1200; appliedIndex:1199; }","duration":"208.505335ms","start":"2023-08-17T21:13:17.335Z","end":"2023-08-17T21:13:17.543Z","steps":["trace[736286203] 'read index received'  (duration: 208.2722ms)","trace[736286203] 'applied index is now lower than readState.Index'  (duration: 232.667µs)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T21:13:17.544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.300514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2023-08-17T21:13:17.544Z","caller":"traceutil/trace.go:171","msg":"trace[253539399] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1161; }","duration":"209.48738ms","start":"2023-08-17T21:13:17.335Z","end":"2023-08-17T21:13:17.544Z","steps":["trace[253539399] 'agreement among raft nodes before linearized reading'  (duration: 209.147687ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:13:17.545Z","caller":"traceutil/trace.go:171","msg":"trace[1936194610] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"234.838892ms","start":"2023-08-17T21:13:17.310Z","end":"2023-08-17T21:13:17.545Z","steps":["trace[1936194610] 'process raft request'  (duration: 233.272363ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:17.545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.390879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/default/nginx-ingress\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T21:13:17.545Z","caller":"traceutil/trace.go:171","msg":"trace[94285676] range","detail":"{range_begin:/registry/ingress/default/nginx-ingress; range_end:; response_count:0; response_revision:1161; }","duration":"188.677152ms","start":"2023-08-17T21:13:17.357Z","end":"2023-08-17T21:13:17.545Z","steps":["trace[94285676] 'agreement among raft nodes before linearized reading'  (duration: 188.359613ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:17.546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.392048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81196"}
	{"level":"info","ts":"2023-08-17T21:13:17.546Z","caller":"traceutil/trace.go:171","msg":"trace[712841448] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1161; }","duration":"128.49092ms","start":"2023-08-17T21:13:17.417Z","end":"2023-08-17T21:13:17.546Z","steps":["trace[712841448] 'agreement among raft nodes before linearized reading'  (duration: 128.268659ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:17.687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.753569ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" ","response":"range_response_count:1 size:7521"}
	{"level":"info","ts":"2023-08-17T21:13:17.687Z","caller":"traceutil/trace.go:171","msg":"trace[1994426403] range","detail":"{range_begin:/registry/pods/gadget/; range_end:/registry/pods/gadget0; response_count:1; response_revision:1161; }","duration":"319.906297ms","start":"2023-08-17T21:13:17.367Z","end":"2023-08-17T21:13:17.687Z","steps":["trace[1994426403] 'agreement among raft nodes before linearized reading'  (duration: 179.045992ms)","trace[1994426403] 'range keys from in-memory index tree'  (duration: 140.508102ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T21:13:17.687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T21:13:17.367Z","time spent":"319.972664ms","remote":"127.0.0.1:55448","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":7544,"request content":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" "}
	{"level":"info","ts":"2023-08-17T21:13:17.688Z","caller":"traceutil/trace.go:171","msg":"trace[1442767925] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"116.875257ms","start":"2023-08-17T21:13:17.571Z","end":"2023-08-17T21:13:17.688Z","steps":["trace[1442767925] 'process raft request'  (duration: 113.609309ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:17.688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.146748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/default/nginx-ingress\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T21:13:17.689Z","caller":"traceutil/trace.go:171","msg":"trace[1166371053] range","detail":"{range_begin:/registry/ingress/default/nginx-ingress; range_end:; response_count:0; response_revision:1161; }","duration":"124.164001ms","start":"2023-08-17T21:13:17.564Z","end":"2023-08-17T21:13:17.689Z","steps":["trace[1166371053] 'range keys from in-memory index tree'  (duration: 123.034522ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:13:22.180Z","caller":"traceutil/trace.go:171","msg":"trace[217505798] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"119.949184ms","start":"2023-08-17T21:13:22.060Z","end":"2023-08-17T21:13:22.180Z","steps":["trace[217505798] 'process raft request'  (duration: 119.36825ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:13:22.199Z","caller":"traceutil/trace.go:171","msg":"trace[1947860273] transaction","detail":"{read_only:false; response_revision:1209; number_of_response:1; }","duration":"138.009318ms","start":"2023-08-17T21:13:22.061Z","end":"2023-08-17T21:13:22.199Z","steps":["trace[1947860273] 'process raft request'  (duration: 137.888932ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:13:29.703Z","caller":"traceutil/trace.go:171","msg":"trace[1358461377] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"330.490204ms","start":"2023-08-17T21:13:29.372Z","end":"2023-08-17T21:13:29.703Z","steps":["trace[1358461377] 'process raft request'  (duration: 330.403305ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:29.703Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T21:13:29.372Z","time spent":"330.615001ms","remote":"127.0.0.1:55448","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3959,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/registry-proxy-kk4lq\" mod_revision:1310 > success:<request_put:<key:\"/registry/pods/kube-system/registry-proxy-kk4lq\" value_size:3904 >> failure:<request_range:<key:\"/registry/pods/kube-system/registry-proxy-kk4lq\" > >"}
	{"level":"info","ts":"2023-08-17T21:13:37.125Z","caller":"traceutil/trace.go:171","msg":"trace[1375123283] transaction","detail":"{read_only:false; response_revision:1348; number_of_response:1; }","duration":"472.292232ms","start":"2023-08-17T21:13:36.652Z","end":"2023-08-17T21:13:37.125Z","steps":["trace[1375123283] 'process raft request'  (duration: 471.988272ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:37.125Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T21:13:36.652Z","time spent":"472.459961ms","remote":"127.0.0.1:55470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1318 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2023-08-17T21:13:40.160Z","caller":"traceutil/trace.go:171","msg":"trace[1797792392] transaction","detail":"{read_only:false; response_revision:1353; number_of_response:1; }","duration":"303.8122ms","start":"2023-08-17T21:13:39.856Z","end":"2023-08-17T21:13:40.160Z","steps":["trace[1797792392] 'process raft request'  (duration: 303.50021ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:13:40.160Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T21:13:39.856Z","time spent":"304.011675ms","remote":"127.0.0.1:55444","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1350 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-08-17T21:13:45.732Z","caller":"traceutil/trace.go:171","msg":"trace[2045889067] transaction","detail":"{read_only:false; response_revision:1364; number_of_response:1; }","duration":"129.683098ms","start":"2023-08-17T21:13:45.603Z","end":"2023-08-17T21:13:45.732Z","steps":["trace[2045889067] 'process raft request'  (duration: 129.544864ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [dc8290b54b29521f88fd1c751d3f637479c063fd449c2a338b2b0c7e2928b5a5] <==
	* 2023/08/17 21:13:08 GCP Auth Webhook started!
	2023/08/17 21:13:14 Ready to marshal response ...
	2023/08/17 21:13:14 Ready to write response ...
	2023/08/17 21:13:18 Ready to marshal response ...
	2023/08/17 21:13:18 Ready to write response ...
	2023/08/17 21:13:19 Ready to marshal response ...
	2023/08/17 21:13:19 Ready to write response ...
	2023/08/17 21:13:25 Ready to marshal response ...
	2023/08/17 21:13:25 Ready to write response ...
	2023/08/17 21:13:25 Ready to marshal response ...
	2023/08/17 21:13:25 Ready to write response ...
	2023/08/17 21:13:25 Ready to marshal response ...
	2023/08/17 21:13:25 Ready to write response ...
	2023/08/17 21:13:34 Ready to marshal response ...
	2023/08/17 21:13:34 Ready to write response ...
	2023/08/17 21:14:09 Ready to marshal response ...
	2023/08/17 21:14:09 Ready to write response ...
	2023/08/17 21:15:41 Ready to marshal response ...
	2023/08/17 21:15:41 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:15:52 up 5 min,  0 users,  load average: 1.27, 2.15, 1.09
	Linux addons-696435 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d9c01caa565c6911b6dd6603f031010ec533acd76a4aa55c3b17273e8a5f73bb] <==
	* I0817 21:14:27.522001       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.522285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.544826       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.544907       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.568498       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.568749       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.628163       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.628250       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.658833       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.658922       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.672305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.672391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.704734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.704845       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:14:27.716911       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:14:27.716969       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0817 21:14:28.629199       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0817 21:14:28.717457       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0817 21:14:28.746942       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0817 21:14:31.973747       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0817 21:14:31.973838       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 21:14:31.973902       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 21:14:31.973928       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 21:15:41.387088       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.104.233.36]
	
	* 
	* ==> kube-controller-manager [2c333a20e91e30461ff4456e19fccdb7ddabb6123915f562a180819f10346f9a] <==
	* I0817 21:14:42.572299       1 shared_informer.go:318] Caches are synced for garbage collector
	W0817 21:14:44.498030       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:44.498061       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:14:44.913241       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:44.913345       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:14:48.485267       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:48.485382       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:04.305633       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:04.305719       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:07.228201       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:07.228282       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:07.356244       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:07.356298       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:09.857858       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:09.857986       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:35.418691       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:35.418834       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:40.191933       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:40.192129       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0817 21:15:41.090107       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0817 21:15:41.147659       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-xgwg9"
	I0817 21:15:44.266632       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0817 21:15:44.301834       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0817 21:15:46.536013       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:46.536079       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [efa40a05b8278cc0279aa6ac7812678e9dd786544ffd517af40be464a73b9fb7] <==
	* I0817 21:11:57.698425       1 node.go:141] Successfully retrieved node IP: 192.168.39.18
	I0817 21:11:57.699004       1 server_others.go:110] "Detected node IP" address="192.168.39.18"
	I0817 21:11:57.699145       1 server_others.go:554] "Using iptables proxy"
	I0817 21:11:58.553843       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 21:11:58.553910       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:11:58.553944       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:11:58.576171       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:11:58.576221       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:11:58.588242       1 config.go:188] "Starting service config controller"
	I0817 21:11:58.588300       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:11:58.588324       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:11:58.588327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:11:58.613485       1 config.go:315] "Starting node config controller"
	I0817 21:11:58.613614       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:11:58.712970       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 21:11:58.713104       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:11:58.817455       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [5012e5444de1e988c952fead88d1fef92fac6ca95a35aaadc1e8286ce0887afd] <==
	* W0817 21:11:26.942287       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:11:26.944384       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 21:11:27.769849       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:27.769906       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 21:11:27.807580       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:11:27.807660       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0817 21:11:27.821990       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:11:27.822081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 21:11:27.941726       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:11:27.941817       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 21:11:27.952136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:27.952221       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 21:11:27.964151       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:11:27.964210       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 21:11:28.007692       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:11:28.007743       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 21:11:28.089871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 21:11:28.089927       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 21:11:28.144743       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:28.144797       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0817 21:11:28.239013       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:11:28.239159       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 21:11:28.295641       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:28.295763       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0817 21:11:30.722149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 21:10:57 UTC, ends at Thu 2023-08-17 21:15:52 UTC. --
	Aug 17 21:15:41 addons-696435 kubelet[1250]: I0817 21:15:41.216902    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4x5\" (UniqueName: \"kubernetes.io/projected/6c781b79-05bb-4e59-b7cd-c3951e364132-kube-api-access-nx4x5\") pod \"hello-world-app-65bdb79f98-xgwg9\" (UID: \"6c781b79-05bb-4e59-b7cd-c3951e364132\") " pod="default/hello-world-app-65bdb79f98-xgwg9"
	Aug 17 21:15:41 addons-696435 kubelet[1250]: I0817 21:15:41.216979    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6c781b79-05bb-4e59-b7cd-c3951e364132-gcp-creds\") pod \"hello-world-app-65bdb79f98-xgwg9\" (UID: \"6c781b79-05bb-4e59-b7cd-c3951e364132\") " pod="default/hello-world-app-65bdb79f98-xgwg9"
	Aug 17 21:15:42 addons-696435 kubelet[1250]: I0817 21:15:42.528501    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftmp\" (UniqueName: \"kubernetes.io/projected/6f2d1451-9a50-422e-a13f-6bce5aec47c5-kube-api-access-lftmp\") pod \"6f2d1451-9a50-422e-a13f-6bce5aec47c5\" (UID: \"6f2d1451-9a50-422e-a13f-6bce5aec47c5\") "
	Aug 17 21:15:42 addons-696435 kubelet[1250]: I0817 21:15:42.532162    1250 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2d1451-9a50-422e-a13f-6bce5aec47c5-kube-api-access-lftmp" (OuterVolumeSpecName: "kube-api-access-lftmp") pod "6f2d1451-9a50-422e-a13f-6bce5aec47c5" (UID: "6f2d1451-9a50-422e-a13f-6bce5aec47c5"). InnerVolumeSpecName "kube-api-access-lftmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 21:15:42 addons-696435 kubelet[1250]: I0817 21:15:42.629494    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lftmp\" (UniqueName: \"kubernetes.io/projected/6f2d1451-9a50-422e-a13f-6bce5aec47c5-kube-api-access-lftmp\") on node \"addons-696435\" DevicePath \"\""
	Aug 17 21:15:43 addons-696435 kubelet[1250]: I0817 21:15:43.274346    1250 scope.go:115] "RemoveContainer" containerID="6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb"
	Aug 17 21:15:43 addons-696435 kubelet[1250]: I0817 21:15:43.345214    1250 scope.go:115] "RemoveContainer" containerID="6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb"
	Aug 17 21:15:43 addons-696435 kubelet[1250]: E0817 21:15:43.346267    1250 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb\": container with ID starting with 6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb not found: ID does not exist" containerID="6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb"
	Aug 17 21:15:43 addons-696435 kubelet[1250]: I0817 21:15:43.346324    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb} err="failed to get container status \"6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb\": rpc error: code = NotFound desc = could not find container \"6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb\": container with ID starting with 6fa1858da5dd2c2859fa8a231a4f620346db9750c4f16cd6da06d067d52af7fb not found: ID does not exist"
	Aug 17 21:15:44 addons-696435 kubelet[1250]: I0817 21:15:44.344227    1250 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-xgwg9" podStartSLOduration=1.785090018 podCreationTimestamp="2023-08-17 21:15:41 +0000 UTC" firstStartedPulling="2023-08-17 21:15:42.366815025 +0000 UTC m=+252.030699037" lastFinishedPulling="2023-08-17 21:15:43.92588269 +0000 UTC m=+253.589766703" observedRunningTime="2023-08-17 21:15:44.335471201 +0000 UTC m=+253.999355233" watchObservedRunningTime="2023-08-17 21:15:44.344157684 +0000 UTC m=+254.008041707"
	Aug 17 21:15:44 addons-696435 kubelet[1250]: E0817 21:15:44.345861    1250 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-8hkmt.177c48331c3eae3c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-8hkmt", UID:"3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba", APIVersion:"v1", ResourceVersion:"671", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-696435"}, FirstTimestamp:time.Date(2023, time.August, 17, 21, 15, 44, 338865724, time.Local), LastTimestamp:time.Date(2023, time.August, 17, 21, 15, 44, 338865724, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-8hkmt.177c48331c3eae3c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:15:44 addons-696435 kubelet[1250]: I0817 21:15:44.567181    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=0314053e-4f04-4060-8cc8-31680955d769 path="/var/lib/kubelet/pods/0314053e-4f04-4060-8cc8-31680955d769/volumes"
	Aug 17 21:15:44 addons-696435 kubelet[1250]: I0817 21:15:44.567781    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6f2d1451-9a50-422e-a13f-6bce5aec47c5 path="/var/lib/kubelet/pods/6f2d1451-9a50-422e-a13f-6bce5aec47c5/volumes"
	Aug 17 21:15:44 addons-696435 kubelet[1250]: I0817 21:15:44.568142    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e6988bc4-26b6-4a75-bfef-0d9307e58c2e path="/var/lib/kubelet/pods/e6988bc4-26b6-4a75-bfef-0d9307e58c2e/volumes"
	Aug 17 21:15:45 addons-696435 kubelet[1250]: I0817 21:15:45.679663    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba-webhook-cert\") pod \"3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba\" (UID: \"3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba\") "
	Aug 17 21:15:45 addons-696435 kubelet[1250]: I0817 21:15:45.679719    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgjqp\" (UniqueName: \"kubernetes.io/projected/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba-kube-api-access-fgjqp\") pod \"3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba\" (UID: \"3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba\") "
	Aug 17 21:15:45 addons-696435 kubelet[1250]: I0817 21:15:45.684472    1250 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba-kube-api-access-fgjqp" (OuterVolumeSpecName: "kube-api-access-fgjqp") pod "3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba" (UID: "3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba"). InnerVolumeSpecName "kube-api-access-fgjqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 21:15:45 addons-696435 kubelet[1250]: I0817 21:15:45.686131    1250 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba" (UID: "3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:15:45 addons-696435 kubelet[1250]: I0817 21:15:45.780393    1250 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba-webhook-cert\") on node \"addons-696435\" DevicePath \"\""
	Aug 17 21:15:45 addons-696435 kubelet[1250]: I0817 21:15:45.780431    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fgjqp\" (UniqueName: \"kubernetes.io/projected/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba-kube-api-access-fgjqp\") on node \"addons-696435\" DevicePath \"\""
	Aug 17 21:15:46 addons-696435 kubelet[1250]: I0817 21:15:46.309409    1250 scope.go:115] "RemoveContainer" containerID="1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b"
	Aug 17 21:15:46 addons-696435 kubelet[1250]: I0817 21:15:46.347441    1250 scope.go:115] "RemoveContainer" containerID="1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b"
	Aug 17 21:15:46 addons-696435 kubelet[1250]: E0817 21:15:46.348689    1250 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b\": container with ID starting with 1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b not found: ID does not exist" containerID="1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b"
	Aug 17 21:15:46 addons-696435 kubelet[1250]: I0817 21:15:46.348763    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b} err="failed to get container status \"1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b\": rpc error: code = NotFound desc = could not find container \"1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b\": container with ID starting with 1ea5a4888ba3a49a3b7018b11e1149b18a557b4cc28abdd465f62abf98a1308b not found: ID does not exist"
	Aug 17 21:15:46 addons-696435 kubelet[1250]: I0817 21:15:46.567664    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba path="/var/lib/kubelet/pods/3e8a0cb1-7b80-412f-acf3-8b03fd22f0ba/volumes"
	
	* 
	* ==> storage-provisioner [cd6a5bc6278398394432e5d63bca2cd63ad1314307e9fbf35846ed40d2c3d9f8] <==
	* I0817 21:12:01.291393       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:12:01.339707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:12:01.340190       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:12:01.369476       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:12:01.370005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-696435_7479d134-e19b-4d85-a95a-9d8282d1d89f!
	I0817 21:12:01.385068       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26748ccd-706a-4168-b2c3-d5046bece66e", APIVersion:"v1", ResourceVersion:"817", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-696435_7479d134-e19b-4d85-a95a-9d8282d1d89f became leader
	I0817 21:12:01.772306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-696435_7479d134-e19b-4d85-a95a-9d8282d1d89f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-696435 -n addons-696435
helpers_test.go:261: (dbg) Run:  kubectl --context addons-696435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (157.69s)

                                                
                                    
TestAddons/StoppedEnableDisable (155.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-696435
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-696435: exit status 82 (2m1.534511689s)

                                                
                                                
-- stdout --
	* Stopping node "addons-696435"  ...
	* Stopping node "addons-696435"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-696435" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-696435
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-696435: exit status 11 (21.551257723s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-696435" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-696435
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-696435: exit status 11 (6.144125202s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-696435" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-696435
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-696435: exit status 11 (6.141096606s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-696435" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 logs --file /tmp/TestFunctionalserialLogsFileCmd2613090995/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 logs --file /tmp/TestFunctionalserialLogsFileCmd2613090995/001/logs.txt: (1.492341798s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 21:22:00.704376  216273 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16865-203458/.minikube/logs/lastStart.txt: bufio.Scanner: token too long
	E0817 21:22:01.383679  216273 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 a77aa1f7acded9468c18897dcdfd89b296eadd96eb1edd4779fdccaef8103606" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 a77aa1f7acded9468c18897dcdfd89b296eadd96eb1edd4779fdccaef8103606": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:22:01Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-5d78c9869d-pcjr7_de85957e-bb2a-432f-b83e-a0b299b7bd38/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-5d78c9869d-pcjr7_de85957e-bb2a-432f-b83e-a0b299b7bd38: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-08-17T21:22:01Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_coredns-5d78c9869d-pcjr7_de85957e-bb2a-432f-b83e-a0b299b7bd38/coredns/0.log\\\": lstat /var/log/pods/kube-system_coredns-5d78c9869d-pcjr7_de85957e-bb2a-432f-b83e-a0b299b7bd38: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: coredns [a77aa1f7acded9468c18897dcdfd89b296eadd96eb1edd4779fdccaef8103606]

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (168.08s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-449686 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-449686 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.078005379s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-449686 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-449686 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3de40a2f-a52b-425b-923e-ec0ead15b733] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3de40a2f-a52b-425b-923e-ec0ead15b733] Running
E0817 21:25:53.189191  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.020100395s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0817 21:27:07.552839  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:07.558158  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:07.568466  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:07.588802  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:07.629208  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:07.709577  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:07.869956  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:08.190520  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:08.831502  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:10.111807  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:12.672807  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:17.793585  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:28.034151  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:27:48.514859  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-449686 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.658152963s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-449686 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.250
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons disable ingress-dns --alsologtostderr -v=1
E0817 21:28:09.344867  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons disable ingress-dns --alsologtostderr -v=1: (6.713613293s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons disable ingress --alsologtostderr -v=1: (7.583065884s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-449686 -n ingress-addon-legacy-449686
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-449686 logs -n 25: (1.143203401s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-540012 image ls                                                | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	| image          | functional-540012 image save                                              | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-540012                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-540012 image rm                                                | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-540012                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-540012 image ls                                                | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	| update-context | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-540012 image load                                              | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-540012 image ls                                                | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	| image          | functional-540012 image save --daemon                                     | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-540012                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-540012 ssh pgrep                                               | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-540012 image build -t                                          | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:23 UTC |
	|                | localhost/my-image:functional-540012                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-540012                                                         | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:22 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-540012 image ls                                                | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:23 UTC |
	| delete         | -p functional-540012                                                      | functional-540012           | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:23 UTC |
	| start          | -p ingress-addon-legacy-449686                                            | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:25 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-449686                                               | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:25 UTC | 17 Aug 23 21:25 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-449686                                               | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:25 UTC | 17 Aug 23 21:25 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-449686                                               | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:25 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-449686 ip                                            | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:28 UTC | 17 Aug 23 21:28 UTC |
	| addons         | ingress-addon-legacy-449686                                               | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:28 UTC | 17 Aug 23 21:28 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-449686                                               | ingress-addon-legacy-449686 | jenkins | v1.31.2 | 17 Aug 23 21:28 UTC | 17 Aug 23 21:28 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:23:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:23:22.738525  219168 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:23:22.738700  219168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:23:22.738710  219168 out.go:309] Setting ErrFile to fd 2...
	I0817 21:23:22.738717  219168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:23:22.738933  219168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:23:22.739596  219168 out.go:303] Setting JSON to false
	I0817 21:23:22.740481  219168 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21928,"bootTime":1692285475,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:23:22.740545  219168 start.go:138] virtualization: kvm guest
	I0817 21:23:22.743157  219168 out.go:177] * [ingress-addon-legacy-449686] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:23:22.745073  219168 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:23:22.745080  219168 notify.go:220] Checking for updates...
	I0817 21:23:22.746724  219168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:23:22.748297  219168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:23:22.749789  219168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:23:22.751576  219168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:23:22.753696  219168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:23:22.755451  219168 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:23:22.791180  219168 out.go:177] * Using the kvm2 driver based on user configuration
	I0817 21:23:22.792911  219168 start.go:298] selected driver: kvm2
	I0817 21:23:22.792928  219168 start.go:902] validating driver "kvm2" against <nil>
	I0817 21:23:22.792940  219168 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:23:22.793725  219168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:23:22.793819  219168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 21:23:22.809451  219168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 21:23:22.809510  219168 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:23:22.809779  219168 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:23:22.809816  219168 cni.go:84] Creating CNI manager for ""
	I0817 21:23:22.809826  219168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:23:22.809837  219168 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0817 21:23:22.809844  219168 start_flags.go:319] config:
	{Name:ingress-addon-legacy-449686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-449686 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:23:22.809981  219168 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:23:22.811997  219168 out.go:177] * Starting control plane node ingress-addon-legacy-449686 in cluster ingress-addon-legacy-449686
	I0817 21:23:22.813525  219168 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:23:22.840792  219168 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0817 21:23:22.840825  219168 cache.go:57] Caching tarball of preloaded images
	I0817 21:23:22.840978  219168 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:23:22.843098  219168 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0817 21:23:22.844874  219168 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:23:22.876238  219168 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0817 21:23:26.629456  219168 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:23:26.629558  219168 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:23:27.581545  219168 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0817 21:23:27.581915  219168 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/config.json ...
	I0817 21:23:27.581950  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/config.json: {Name:mkc088ac8747d67f54b683c2ed506ee2b3353bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:23:27.582160  219168 start.go:365] acquiring machines lock for ingress-addon-legacy-449686: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:23:27.582196  219168 start.go:369] acquired machines lock for "ingress-addon-legacy-449686" in 20.198µs
	I0817 21:23:27.582215  219168 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-449686 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-449686 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:23:27.582312  219168 start.go:125] createHost starting for "" (driver="kvm2")
	I0817 21:23:27.584947  219168 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0817 21:23:27.585131  219168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:23:27.585198  219168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:23:27.599700  219168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33141
	I0817 21:23:27.600231  219168 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:23:27.600906  219168 main.go:141] libmachine: Using API Version  1
	I0817 21:23:27.600928  219168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:23:27.601300  219168 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:23:27.601489  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetMachineName
	I0817 21:23:27.601675  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:27.601830  219168 start.go:159] libmachine.API.Create for "ingress-addon-legacy-449686" (driver="kvm2")
	I0817 21:23:27.601863  219168 client.go:168] LocalClient.Create starting
	I0817 21:23:27.601891  219168 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem
	I0817 21:23:27.601923  219168 main.go:141] libmachine: Decoding PEM data...
	I0817 21:23:27.601940  219168 main.go:141] libmachine: Parsing certificate...
	I0817 21:23:27.602019  219168 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem
	I0817 21:23:27.602037  219168 main.go:141] libmachine: Decoding PEM data...
	I0817 21:23:27.602048  219168 main.go:141] libmachine: Parsing certificate...
	I0817 21:23:27.602120  219168 main.go:141] libmachine: Running pre-create checks...
	I0817 21:23:27.602138  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .PreCreateCheck
	I0817 21:23:27.602564  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetConfigRaw
	I0817 21:23:27.603091  219168 main.go:141] libmachine: Creating machine...
	I0817 21:23:27.603113  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .Create
	I0817 21:23:27.603280  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Creating KVM machine...
	I0817 21:23:27.604465  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found existing default KVM network
	I0817 21:23:27.605155  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:27.605021  219202 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f110}
	I0817 21:23:27.610745  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | trying to create private KVM network mk-ingress-addon-legacy-449686 192.168.39.0/24...
	I0817 21:23:27.684198  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | private KVM network mk-ingress-addon-legacy-449686 192.168.39.0/24 created
	I0817 21:23:27.684239  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting up store path in /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686 ...
	I0817 21:23:27.684257  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:27.684153  219202 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:23:27.684277  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Building disk image from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0817 21:23:27.684294  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Downloading /home/jenkins/minikube-integration/16865-203458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0817 21:23:27.918660  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:27.918428  219202 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa...
	I0817 21:23:28.028615  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:28.028426  219202 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/ingress-addon-legacy-449686.rawdisk...
	I0817 21:23:28.028663  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Writing magic tar header
	I0817 21:23:28.028686  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Writing SSH key tar header
	I0817 21:23:28.028695  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:28.028548  219202 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686 ...
	I0817 21:23:28.028711  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686
	I0817 21:23:28.028725  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines
	I0817 21:23:28.028740  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686 (perms=drwx------)
	I0817 21:23:28.028755  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:23:28.028772  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines (perms=drwxr-xr-x)
	I0817 21:23:28.028792  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube (perms=drwxr-xr-x)
	I0817 21:23:28.028799  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458 (perms=drwxrwxr-x)
	I0817 21:23:28.028812  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0817 21:23:28.028824  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458
	I0817 21:23:28.028840  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0817 21:23:28.028865  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0817 21:23:28.028883  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home/jenkins
	I0817 21:23:28.028890  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Checking permissions on dir: /home
	I0817 21:23:28.028902  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Skipping /home - not owner
	I0817 21:23:28.028911  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Creating domain...
	I0817 21:23:28.029879  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) define libvirt domain using xml: 
	I0817 21:23:28.029903  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) <domain type='kvm'>
	I0817 21:23:28.029918  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <name>ingress-addon-legacy-449686</name>
	I0817 21:23:28.029924  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <memory unit='MiB'>4096</memory>
	I0817 21:23:28.029934  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <vcpu>2</vcpu>
	I0817 21:23:28.029944  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <features>
	I0817 21:23:28.029956  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <acpi/>
	I0817 21:23:28.029965  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <apic/>
	I0817 21:23:28.029972  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <pae/>
	I0817 21:23:28.029982  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     
	I0817 21:23:28.029991  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   </features>
	I0817 21:23:28.029996  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <cpu mode='host-passthrough'>
	I0817 21:23:28.030003  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   
	I0817 21:23:28.030012  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   </cpu>
	I0817 21:23:28.030024  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <os>
	I0817 21:23:28.030041  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <type>hvm</type>
	I0817 21:23:28.030119  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <boot dev='cdrom'/>
	I0817 21:23:28.030147  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <boot dev='hd'/>
	I0817 21:23:28.030155  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <bootmenu enable='no'/>
	I0817 21:23:28.030169  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   </os>
	I0817 21:23:28.030188  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   <devices>
	I0817 21:23:28.030202  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <disk type='file' device='cdrom'>
	I0817 21:23:28.030221  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/boot2docker.iso'/>
	I0817 21:23:28.030232  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <target dev='hdc' bus='scsi'/>
	I0817 21:23:28.030238  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <readonly/>
	I0817 21:23:28.030252  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </disk>
	I0817 21:23:28.030267  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <disk type='file' device='disk'>
	I0817 21:23:28.030282  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0817 21:23:28.030303  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/ingress-addon-legacy-449686.rawdisk'/>
	I0817 21:23:28.030317  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <target dev='hda' bus='virtio'/>
	I0817 21:23:28.030328  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </disk>
	I0817 21:23:28.030337  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <interface type='network'>
	I0817 21:23:28.030346  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <source network='mk-ingress-addon-legacy-449686'/>
	I0817 21:23:28.030357  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <model type='virtio'/>
	I0817 21:23:28.030368  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </interface>
	I0817 21:23:28.030378  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <interface type='network'>
	I0817 21:23:28.030389  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <source network='default'/>
	I0817 21:23:28.030398  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <model type='virtio'/>
	I0817 21:23:28.030410  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </interface>
	I0817 21:23:28.030420  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <serial type='pty'>
	I0817 21:23:28.030457  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <target port='0'/>
	I0817 21:23:28.030497  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </serial>
	I0817 21:23:28.030514  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <console type='pty'>
	I0817 21:23:28.030533  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <target type='serial' port='0'/>
	I0817 21:23:28.030548  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </console>
	I0817 21:23:28.030561  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     <rng model='virtio'>
	I0817 21:23:28.030577  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)       <backend model='random'>/dev/random</backend>
	I0817 21:23:28.030590  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     </rng>
	I0817 21:23:28.030614  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     
	I0817 21:23:28.030635  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)     
	I0817 21:23:28.030661  219168 main.go:141] libmachine: (ingress-addon-legacy-449686)   </devices>
	I0817 21:23:28.030684  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) </domain>
	I0817 21:23:28.030701  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) 
	I0817 21:23:28.034951  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:fa:aa:95 in network default
	I0817 21:23:28.035559  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Ensuring networks are active...
	I0817 21:23:28.035596  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:28.036381  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Ensuring network default is active
	I0817 21:23:28.036687  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Ensuring network mk-ingress-addon-legacy-449686 is active
	I0817 21:23:28.037312  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Getting domain xml...
	I0817 21:23:28.038046  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Creating domain...
	I0817 21:23:29.291871  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Waiting to get IP...
	I0817 21:23:29.292806  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:29.293242  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:29.293311  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:29.293237  219202 retry.go:31] will retry after 260.554757ms: waiting for machine to come up
	I0817 21:23:29.555726  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:29.556347  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:29.556382  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:29.556281  219202 retry.go:31] will retry after 372.775848ms: waiting for machine to come up
	I0817 21:23:29.931223  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:29.931664  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:29.931697  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:29.931622  219202 retry.go:31] will retry after 424.087908ms: waiting for machine to come up
	I0817 21:23:30.357342  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:30.357744  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:30.357770  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:30.357698  219202 retry.go:31] will retry after 605.042664ms: waiting for machine to come up
	I0817 21:23:30.964438  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:30.964824  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:30.964860  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:30.964765  219202 retry.go:31] will retry after 651.296281ms: waiting for machine to come up
	I0817 21:23:31.617534  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:31.617917  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:31.617950  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:31.617870  219202 retry.go:31] will retry after 636.60478ms: waiting for machine to come up
	I0817 21:23:32.255744  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:32.256183  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:32.256219  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:32.256111  219202 retry.go:31] will retry after 920.14609ms: waiting for machine to come up
	I0817 21:23:33.178439  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:33.179203  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:33.179238  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:33.179147  219202 retry.go:31] will retry after 1.436581521s: waiting for machine to come up
	I0817 21:23:34.617770  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:34.618194  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:34.618225  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:34.618145  219202 retry.go:31] will retry after 1.517501093s: waiting for machine to come up
	I0817 21:23:36.137891  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:36.138270  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:36.138299  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:36.138211  219202 retry.go:31] will retry after 2.092047468s: waiting for machine to come up
	I0817 21:23:38.232196  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:38.232617  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:38.232676  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:38.232587  219202 retry.go:31] will retry after 2.89137893s: waiting for machine to come up
	I0817 21:23:41.127996  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:41.128403  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:41.128427  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:41.128356  219202 retry.go:31] will retry after 2.434521508s: waiting for machine to come up
	I0817 21:23:43.564941  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:43.565246  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:43.565287  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:43.565217  219202 retry.go:31] will retry after 3.857522187s: waiting for machine to come up
	I0817 21:23:47.424211  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:47.424761  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find current IP address of domain ingress-addon-legacy-449686 in network mk-ingress-addon-legacy-449686
	I0817 21:23:47.424789  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | I0817 21:23:47.424617  219202 retry.go:31] will retry after 4.993129418s: waiting for machine to come up
	I0817 21:23:52.423382  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:52.423804  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Found IP for machine: 192.168.39.250
	I0817 21:23:52.423828  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Reserving static IP address...
	I0817 21:23:52.423838  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has current primary IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:52.424121  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-449686", mac: "52:54:00:86:52:c4", ip: "192.168.39.250"} in network mk-ingress-addon-legacy-449686
	I0817 21:23:52.500050  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Getting to WaitForSSH function...
	I0817 21:23:52.500087  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Reserved static IP address: 192.168.39.250
	I0817 21:23:52.500100  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Waiting for SSH to be available...
	I0817 21:23:52.502989  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:52.503428  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686
	I0817 21:23:52.503461  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-449686 interface with MAC address 52:54:00:86:52:c4
	I0817 21:23:52.503573  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Using SSH client type: external
	I0817 21:23:52.503604  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa (-rw-------)
	I0817 21:23:52.503653  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 21:23:52.503697  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | About to run SSH command:
	I0817 21:23:52.503714  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | exit 0
	I0817 21:23:52.507985  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | SSH cmd err, output: exit status 255: 
	I0817 21:23:52.508014  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0817 21:23:52.508023  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | command : exit 0
	I0817 21:23:52.508029  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | err     : exit status 255
	I0817 21:23:52.508038  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | output  : 
	I0817 21:23:55.510669  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Getting to WaitForSSH function...
	I0817 21:23:55.513334  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.513833  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:55.513869  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.514049  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Using SSH client type: external
	I0817 21:23:55.514090  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa (-rw-------)
	I0817 21:23:55.514122  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 21:23:55.514143  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | About to run SSH command:
	I0817 21:23:55.514182  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | exit 0
	I0817 21:23:55.606364  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | SSH cmd err, output: <nil>: 
	I0817 21:23:55.606654  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) KVM machine creation complete!
	I0817 21:23:55.607036  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetConfigRaw
	I0817 21:23:55.652868  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:55.653213  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:55.653446  219168 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0817 21:23:55.653468  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetState
	I0817 21:23:55.655096  219168 main.go:141] libmachine: Detecting operating system of created instance...
	I0817 21:23:55.655118  219168 main.go:141] libmachine: Waiting for SSH to be available...
	I0817 21:23:55.655126  219168 main.go:141] libmachine: Getting to WaitForSSH function...
	I0817 21:23:55.655133  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:55.657691  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.658120  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:55.658150  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.658352  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:55.658567  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:55.658725  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:55.658857  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:55.659148  219168 main.go:141] libmachine: Using SSH client type: native
	I0817 21:23:55.659649  219168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0817 21:23:55.659662  219168 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0817 21:23:55.782230  219168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:23:55.782261  219168 main.go:141] libmachine: Detecting the provisioner...
	I0817 21:23:55.782269  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:55.785165  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.785582  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:55.785623  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.785787  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:55.786000  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:55.786198  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:55.786353  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:55.786578  219168 main.go:141] libmachine: Using SSH client type: native
	I0817 21:23:55.787185  219168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0817 21:23:55.787204  219168 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0817 21:23:55.915748  219168 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0817 21:23:55.915949  219168 main.go:141] libmachine: found compatible host: buildroot
	I0817 21:23:55.915967  219168 main.go:141] libmachine: Provisioning with buildroot...
	I0817 21:23:55.915980  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetMachineName
	I0817 21:23:55.916313  219168 buildroot.go:166] provisioning hostname "ingress-addon-legacy-449686"
	I0817 21:23:55.916354  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetMachineName
	I0817 21:23:55.916568  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:55.919639  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.920102  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:55.920151  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:55.920410  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:55.920644  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:55.920821  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:55.920972  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:55.921118  219168 main.go:141] libmachine: Using SSH client type: native
	I0817 21:23:55.921539  219168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0817 21:23:55.921561  219168 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-449686 && echo "ingress-addon-legacy-449686" | sudo tee /etc/hostname
	I0817 21:23:56.064264  219168 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-449686
	
	I0817 21:23:56.064307  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:56.067263  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.067641  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.067680  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.067847  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:56.068110  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.068275  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.068448  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:56.068597  219168 main.go:141] libmachine: Using SSH client type: native
	I0817 21:23:56.069034  219168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0817 21:23:56.069060  219168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-449686' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-449686/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-449686' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:23:56.203345  219168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:23:56.203383  219168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:23:56.203407  219168 buildroot.go:174] setting up certificates
	I0817 21:23:56.203418  219168 provision.go:83] configureAuth start
	I0817 21:23:56.203427  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetMachineName
	I0817 21:23:56.203741  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetIP
	I0817 21:23:56.207012  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.207330  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.207377  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.207661  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:56.210245  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.210574  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.210606  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.210855  219168 provision.go:138] copyHostCerts
	I0817 21:23:56.210886  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:23:56.210925  219168 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 21:23:56.210941  219168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:23:56.211019  219168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:23:56.211125  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:23:56.211149  219168 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 21:23:56.211158  219168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:23:56.211196  219168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:23:56.211303  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:23:56.211333  219168 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 21:23:56.211339  219168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:23:56.211374  219168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:23:56.211446  219168 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-449686 san=[192.168.39.250 192.168.39.250 localhost 127.0.0.1 minikube ingress-addon-legacy-449686]
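For context, the server certificate above is generated inside minikube; a minimal openssl sketch that would produce an equivalent certificate with the same SAN list (the paths and the openssl invocation are illustrative assumptions, not taken from this log) looks like:
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ingress-addon-legacy-449686"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.39.250,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-449686')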
	I0817 21:23:56.326890  219168 provision.go:172] copyRemoteCerts
	I0817 21:23:56.326970  219168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:23:56.327002  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:56.330279  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.330629  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.330668  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.330867  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:56.331052  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.331229  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:56.331359  219168 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa Username:docker}
	I0817 21:23:56.423314  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:23:56.423386  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:23:56.447029  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:23:56.447105  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0817 21:23:56.470768  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:23:56.470842  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:23:56.494385  219168 provision.go:86] duration metric: configureAuth took 290.954641ms
	I0817 21:23:56.494414  219168 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:23:56.494596  219168 config.go:182] Loaded profile config "ingress-addon-legacy-449686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0817 21:23:56.494675  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:56.497530  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.497857  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.497896  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.498116  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:56.498294  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.498455  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.498566  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:56.498748  219168 main.go:141] libmachine: Using SSH client type: native
	I0817 21:23:56.499146  219168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0817 21:23:56.499163  219168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:23:56.815158  219168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:23:56.815191  219168 main.go:141] libmachine: Checking connection to Docker...
	I0817 21:23:56.815205  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetURL
	I0817 21:23:56.816536  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Using libvirt version 6000000
	I0817 21:23:56.818565  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.818898  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.818929  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.819093  219168 main.go:141] libmachine: Docker is up and running!
	I0817 21:23:56.819108  219168 main.go:141] libmachine: Reticulating splines...
	I0817 21:23:56.819115  219168 client.go:171] LocalClient.Create took 29.217246365s
	I0817 21:23:56.819141  219168 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-449686" took 29.217310928s
	I0817 21:23:56.819157  219168 start.go:300] post-start starting for "ingress-addon-legacy-449686" (driver="kvm2")
	I0817 21:23:56.819175  219168 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:23:56.819203  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:56.819505  219168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:23:56.819539  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:56.821843  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.822170  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.822210  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.822402  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:56.822616  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.822840  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:56.823021  219168 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa Username:docker}
	I0817 21:23:56.915876  219168 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:23:56.920288  219168 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:23:56.920314  219168 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:23:56.920381  219168 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:23:56.920476  219168 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 21:23:56.920492  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /etc/ssl/certs/2106702.pem
	I0817 21:23:56.920616  219168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:23:56.929222  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:23:56.952404  219168 start.go:303] post-start completed in 133.231136ms
	I0817 21:23:56.952471  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetConfigRaw
	I0817 21:23:56.953215  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetIP
	I0817 21:23:56.956466  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.956846  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.956900  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.957225  219168 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/config.json ...
	I0817 21:23:56.957414  219168 start.go:128] duration metric: createHost completed in 29.375089246s
	I0817 21:23:56.957440  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:56.959903  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.960343  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:56.960387  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:56.960601  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:56.960864  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.961057  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:56.961265  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:56.961440  219168 main.go:141] libmachine: Using SSH client type: native
	I0817 21:23:56.961854  219168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0817 21:23:56.961867  219168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 21:23:57.086852  219168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692307437.075768636
	
	I0817 21:23:57.086877  219168 fix.go:206] guest clock: 1692307437.075768636
	I0817 21:23:57.086885  219168 fix.go:219] Guest: 2023-08-17 21:23:57.075768636 +0000 UTC Remote: 2023-08-17 21:23:56.957426757 +0000 UTC m=+34.254469669 (delta=118.341879ms)
	I0817 21:23:57.086905  219168 fix.go:190] guest clock delta is within tolerance: 118.341879ms
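The guest-clock check above runs date +%s.%N over SSH and compares the result to the host's wall clock; a rough shell equivalent (illustrative only, user and IP taken from this log) is:
	guest=$(ssh docker@192.168.39.250 'date +%s.%N')            # e.g. 1692307437.075768636
	host=$(date +%s.%N)                                         # host time at roughly the same moment
	echo "guest clock delta: $(echo "$guest - $host" | bc)s"    # small drift is within tolerance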
	I0817 21:23:57.086909  219168 start.go:83] releasing machines lock for "ingress-addon-legacy-449686", held for 29.504704095s
	I0817 21:23:57.086931  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:57.087203  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetIP
	I0817 21:23:57.090155  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:57.090461  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:57.090487  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:57.090678  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:57.091203  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:57.091408  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:23:57.091509  219168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:23:57.091566  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:57.091616  219168 ssh_runner.go:195] Run: cat /version.json
	I0817 21:23:57.091641  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:23:57.094127  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:57.094423  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:57.094467  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:57.094494  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:57.094647  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:57.094845  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:57.094940  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:57.094982  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:57.095027  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:57.095134  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:23:57.095215  219168 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa Username:docker}
	I0817 21:23:57.095281  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:23:57.095385  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:23:57.095538  219168 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa Username:docker}
	I0817 21:23:57.206471  219168 ssh_runner.go:195] Run: systemctl --version
	I0817 21:23:57.212422  219168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:23:57.369897  219168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 21:23:57.377128  219168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:23:57.377223  219168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:23:57.392893  219168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
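To see the effect of the CNI cleanup above on the node (sketch; directory contents can differ between runs):
	ls /etc/cni/net.d/
	# 87-podman-bridge.conflist.mk_disabled   <- renamed (disabled) by the find/mv above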
	I0817 21:23:57.392923  219168 start.go:466] detecting cgroup driver to use...
	I0817 21:23:57.393003  219168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:23:57.406011  219168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:23:57.418254  219168 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:23:57.418319  219168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:23:57.431671  219168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:23:57.444677  219168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:23:57.545731  219168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:23:57.666856  219168 docker.go:212] disabling docker service ...
	I0817 21:23:57.666950  219168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:23:57.680153  219168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:23:57.692552  219168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:23:57.800624  219168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:23:57.907266  219168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:23:57.919675  219168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:23:57.937468  219168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0817 21:23:57.937546  219168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:23:57.946917  219168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:23:57.946995  219168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:23:57.956325  219168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:23:57.965708  219168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
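After the sed edits above, the CRI-O drop-in contains at least these keys (sketch; other settings in 02-crio.conf are omitted):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"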
	I0817 21:23:57.975361  219168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:23:57.985128  219168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:23:57.993710  219168 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:23:57.993794  219168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 21:23:58.006708  219168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
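A quick way to confirm the netfilter state these commands establish (illustrative check, not part of the test run):
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# the echo above sets ip_forward to 1; bridge-nf-call-iptables only exists once br_netfilter is loaded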
	I0817 21:23:58.015340  219168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:23:58.126431  219168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:23:58.302074  219168 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:23:58.302198  219168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:23:58.309823  219168 start.go:534] Will wait 60s for crictl version
	I0817 21:23:58.309883  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:23:58.313511  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:23:58.344022  219168 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:23:58.344121  219168 ssh_runner.go:195] Run: crio --version
	I0817 21:23:58.398138  219168 ssh_runner.go:195] Run: crio --version
	I0817 21:23:58.447426  219168 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0817 21:23:58.449041  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetIP
	I0817 21:23:58.452010  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:58.452471  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:23:58.452506  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:23:58.452865  219168 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:23:58.457296  219168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:23:58.470658  219168 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:23:58.470719  219168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:23:58.501343  219168 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0817 21:23:58.501413  219168 ssh_runner.go:195] Run: which lz4
	I0817 21:23:58.505264  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0817 21:23:58.505376  219168 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 21:23:58.509587  219168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:23:58.509613  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0817 21:24:00.454407  219168 crio.go:444] Took 1.949066 seconds to copy over tarball
	I0817 21:24:00.454487  219168 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:24:03.728425  219168 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.273896868s)
	I0817 21:24:03.728473  219168 crio.go:451] Took 3.274030 seconds to extract the tarball
	I0817 21:24:03.728488  219168 ssh_runner.go:146] rm: /preloaded.tar.lz4
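When reproducing the preload step by hand, the cached tarball can be inspected with the same lz4/tar flags the runner used (sketch):
	lz4 -t preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4                 # integrity check
	tar -I lz4 -tf preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 | head  # preview what lands under /var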
	I0817 21:24:03.776587  219168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:24:03.834220  219168 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0817 21:24:03.834251  219168 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 21:24:03.834337  219168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:24:03.834349  219168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:24:03.834385  219168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:24:03.834431  219168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:24:03.834555  219168 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0817 21:24:03.834622  219168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:24:03.834641  219168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0817 21:24:03.834772  219168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:24:03.835703  219168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0817 21:24:03.835778  219168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0817 21:24:03.835797  219168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:24:03.835799  219168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:24:03.835800  219168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:24:03.835841  219168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:24:03.835800  219168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:24:03.835711  219168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:24:04.009940  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:24:04.014736  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:24:04.016385  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:24:04.021645  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:24:04.026516  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0817 21:24:04.037534  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0817 21:24:04.119431  219168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0817 21:24:04.119478  219168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:24:04.119521  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.124973  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:24:04.154792  219168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0817 21:24:04.154873  219168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:24:04.154942  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.162451  219168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0817 21:24:04.162509  219168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:24:04.162569  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.168063  219168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0817 21:24:04.202484  219168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0817 21:24:04.202538  219168 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0817 21:24:04.202549  219168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:24:04.202574  219168 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0817 21:24:04.202622  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.202622  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.221700  219168 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0817 21:24:04.221762  219168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:24:04.221818  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.221821  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:24:04.332873  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:24:04.332920  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:24:04.333010  219168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0817 21:24:04.333046  219168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0817 21:24:04.333082  219168 ssh_runner.go:195] Run: which crictl
	I0817 21:24:04.333104  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:24:04.333167  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0817 21:24:04.333201  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0817 21:24:04.333243  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0817 21:24:04.417478  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0817 21:24:04.417539  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0817 21:24:04.417614  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0817 21:24:04.417690  219168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0817 21:24:04.429533  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0817 21:24:04.432362  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0817 21:24:04.459315  219168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0817 21:24:04.459393  219168 cache_images.go:92] LoadImages completed in 625.128381ms
	W0817 21:24:04.459503  219168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0817 21:24:04.459596  219168 ssh_runner.go:195] Run: crio config
	I0817 21:24:04.521904  219168 cni.go:84] Creating CNI manager for ""
	I0817 21:24:04.521930  219168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:24:04.521956  219168 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:24:04.521982  219168 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-449686 NodeName:ingress-addon-legacy-449686 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 21:24:04.522186  219168 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-449686"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:24:04.522314  219168 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-449686 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-449686 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:24:04.522398  219168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0817 21:24:04.532392  219168 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:24:04.532473  219168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:24:04.542086  219168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0817 21:24:04.559863  219168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0817 21:24:04.576505  219168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
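The three files copied above can be checked on the node as follows (sketch; the kubeadm binary path and flag set are assumptions, since the log does not show the init invocation itself):
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the kubelet ExecStart drop-in printed earlier
	cat /var/tmp/minikube/kubeadm.yaml.new                      # the kubeadm config printed earlier
	# the config is later consumed roughly as:
	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml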
	I0817 21:24:04.593566  219168 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0817 21:24:04.597521  219168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:24:04.609312  219168 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686 for IP: 192.168.39.250
	I0817 21:24:04.609356  219168 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:04.609537  219168 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:24:04.609592  219168 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:24:04.609645  219168 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.key
	I0817 21:24:04.609659  219168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt with IP's: []
	I0817 21:24:04.858360  219168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt ...
	I0817 21:24:04.858399  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: {Name:mk635d79eb4f9791bc418896161b355aeb0e5ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:04.858622  219168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.key ...
	I0817 21:24:04.858640  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.key: {Name:mk9aa91af386c7516896a96dc6038f3e9416d16e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:04.858750  219168 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key.6e35f005
	I0817 21:24:04.858774  219168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt.6e35f005 with IP's: [192.168.39.250 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:24:04.904728  219168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt.6e35f005 ...
	I0817 21:24:04.904765  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt.6e35f005: {Name:mkf4e5ac5030b029947f886e55472bb9c7bae655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:04.904984  219168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key.6e35f005 ...
	I0817 21:24:04.905005  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key.6e35f005: {Name:mkb3013babe96e324ccc97ab8e57be6339350a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:04.905106  219168 certs.go:337] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt.6e35f005 -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt
	I0817 21:24:04.905206  219168 certs.go:341] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key.6e35f005 -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key
	I0817 21:24:04.905276  219168 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.key
	I0817 21:24:04.905297  219168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.crt with IP's: []
	I0817 21:24:05.145296  219168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.crt ...
	I0817 21:24:05.145339  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.crt: {Name:mk4ea8883bb4111ebb51da8420713f5636582e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:05.145536  219168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.key ...
	I0817 21:24:05.145555  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.key: {Name:mk907de1386a3f0d4d84e1222117d87cb386e8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:05.145663  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 21:24:05.145691  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 21:24:05.145710  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 21:24:05.145729  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 21:24:05.145780  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:24:05.145809  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:24:05.145832  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:24:05.145849  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:24:05.145926  219168 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 21:24:05.145977  219168 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 21:24:05.145987  219168 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:24:05.146020  219168 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:24:05.146076  219168 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:24:05.146111  219168 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:24:05.146179  219168 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:24:05.146228  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem -> /usr/share/ca-certificates/210670.pem
	I0817 21:24:05.146248  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /usr/share/ca-certificates/2106702.pem
	I0817 21:24:05.146263  219168 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:24:05.146894  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:24:05.170762  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:24:05.193689  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:24:05.216476  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 21:24:05.239501  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:24:05.262630  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:24:05.285651  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:24:05.307807  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:24:05.331282  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 21:24:05.353971  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 21:24:05.376290  219168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:24:05.398907  219168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:24:05.415102  219168 ssh_runner.go:195] Run: openssl version
	I0817 21:24:05.420763  219168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 21:24:05.431417  219168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 21:24:05.436219  219168 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:24:05.436312  219168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 21:24:05.441950  219168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 21:24:05.452930  219168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 21:24:05.464147  219168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 21:24:05.468747  219168 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:24:05.468802  219168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 21:24:05.474153  219168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:24:05.484864  219168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:24:05.495630  219168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:24:05.500488  219168 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:24:05.500562  219168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:24:05.506017  219168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
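
The three steps repeated above for each certificate (copy the PEM under /usr/share/ca-certificates, hash it with "openssl x509 -hash -noout", then symlink it into /etc/ssl/certs as <hash>.0) are what make the extra CAs discoverable by OpenSSL's hashed-directory lookup inside the guest. A minimal Go sketch of the same idea, shelling out to openssl exactly as the logged commands do; the helper name and error handling are illustrative, not minikube's actual code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCALink computes the OpenSSL subject hash of a PEM certificate and
    // creates the /etc/ssl/certs/<hash>.0 symlink that the hashed lookup expects.
    func installCALink(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // emulate `ln -fs`: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCALink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
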
	I0817 21:24:05.517352  219168 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:24:05.521676  219168 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:24:05.521736  219168 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-449686 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-
addon-legacy-449686 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:24:05.521828  219168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:24:05.521878  219168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:24:05.552718  219168 cri.go:89] found id: ""
	I0817 21:24:05.552789  219168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:24:05.562857  219168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:24:05.572298  219168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:24:05.583400  219168 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:24:05.583467  219168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0817 21:24:05.645383  219168 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0817 21:24:05.645698  219168 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:24:05.758652  219168 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:24:05.758823  219168 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:24:05.758985  219168 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:24:05.929975  219168 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:24:05.930116  219168 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:24:05.930158  219168 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:24:06.047451  219168 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:24:06.050902  219168 out.go:204]   - Generating certificates and keys ...
	I0817 21:24:06.051011  219168 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:24:06.051093  219168 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:24:06.365827  219168 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:24:06.610879  219168 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:24:06.825837  219168 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:24:07.140284  219168 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:24:07.320401  219168 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:24:07.320603  219168 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-449686 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0817 21:24:07.429605  219168 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:24:07.429860  219168 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-449686 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0817 21:24:07.871702  219168 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:24:07.968964  219168 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:24:08.155640  219168 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:24:08.156025  219168 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:24:08.316091  219168 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:24:08.446953  219168 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:24:08.671140  219168 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:24:08.792311  219168 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:24:08.793331  219168 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:24:08.795312  219168 out.go:204]   - Booting up control plane ...
	I0817 21:24:08.795425  219168 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:24:08.799166  219168 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:24:08.801088  219168 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:24:08.804308  219168 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:24:08.807072  219168 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:24:18.310739  219168 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502968 seconds
	I0817 21:24:18.310945  219168 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:24:18.327154  219168 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:24:18.851796  219168 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:24:18.851958  219168 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-449686 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 21:24:19.368053  219168 kubeadm.go:322] [bootstrap-token] Using token: hp6vnd.sd2qs2p0m4yh39bf
	I0817 21:24:19.369585  219168 out.go:204]   - Configuring RBAC rules ...
	I0817 21:24:19.369801  219168 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:24:19.387218  219168 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:24:19.399596  219168 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:24:19.407092  219168 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:24:19.421061  219168 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:24:19.427417  219168 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:24:19.456698  219168 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:24:19.770886  219168 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:24:19.836620  219168 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:24:19.836653  219168 kubeadm.go:322] 
	I0817 21:24:19.836775  219168 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:24:19.836791  219168 kubeadm.go:322] 
	I0817 21:24:19.836894  219168 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:24:19.836904  219168 kubeadm.go:322] 
	I0817 21:24:19.836938  219168 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:24:19.837009  219168 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:24:19.837067  219168 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:24:19.837074  219168 kubeadm.go:322] 
	I0817 21:24:19.837187  219168 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:24:19.837328  219168 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:24:19.837426  219168 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:24:19.837436  219168 kubeadm.go:322] 
	I0817 21:24:19.837552  219168 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:24:19.837658  219168 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:24:19.837668  219168 kubeadm.go:322] 
	I0817 21:24:19.837799  219168 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hp6vnd.sd2qs2p0m4yh39bf \
	I0817 21:24:19.837941  219168 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 21:24:19.837990  219168 kubeadm.go:322]     --control-plane 
	I0817 21:24:19.838000  219168 kubeadm.go:322] 
	I0817 21:24:19.838115  219168 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:24:19.838126  219168 kubeadm.go:322] 
	I0817 21:24:19.838223  219168 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hp6vnd.sd2qs2p0m4yh39bf \
	I0817 21:24:19.838416  219168 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 21:24:19.838614  219168 kubeadm.go:322] W0817 21:24:05.639680     966 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0817 21:24:19.838748  219168 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:24:19.838933  219168 kubeadm.go:322] W0817 21:24:08.794822     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0817 21:24:19.839116  219168 kubeadm.go:322] W0817 21:24:08.796773     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
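
The --discovery-token-ca-cert-hash value printed in both join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). A short Go sketch of that computation, reading the CA from the certificateDir used earlier in this run (/var/lib/minikube/certs); this illustrates the hash format, it is not kubeadm's source.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // kubeadm's stock path is /etc/kubernetes/pki/ca.crt; this run keeps certs
        // under /var/lib/minikube/certs as shown in the [certs] phase above.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:])) // same form as the hash in the join command
    }
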
	I0817 21:24:19.839132  219168 cni.go:84] Creating CNI manager for ""
	I0817 21:24:19.839142  219168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:24:19.841298  219168 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 21:24:19.842932  219168 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 21:24:19.861627  219168 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
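
The two commands above create /etc/cni/net.d and drop a 457-byte bridge conflist into it, which is what the "recommending bridge" decision from cni.go turns into on disk. The log does not show the file's contents; the sketch below writes a generic bridge-plugin conflist of the same shape, with example values rather than minikube's exact bytes.

    package main

    import "os"

    // Illustrative bridge CNI configuration; field values are examples only.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
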
	I0817 21:24:19.885855  219168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:24:19.885969  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:19.885977  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=ingress-addon-legacy-449686 minikube.k8s.io/updated_at=2023_08_17T21_24_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:19.917008  219168 ops.go:34] apiserver oom_adj: -16
	I0817 21:24:20.113095  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:20.347065  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:20.969069  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:21.469196  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:21.969146  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:22.469437  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:22.969413  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:23.469364  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:23.968636  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:24.468618  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:24.969532  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:25.468664  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:25.968442  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:26.468786  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:26.968946  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:27.468678  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:27.969001  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:28.468718  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:28.969133  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:29.469009  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:29.968411  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:30.468579  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:30.969041  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:31.469284  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:31.968857  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:32.469450  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:32.968903  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:33.468799  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:33.968592  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:34.468833  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:34.968568  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:35.468695  219168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:24:35.669628  219168 kubeadm.go:1081] duration metric: took 15.783736693s to wait for elevateKubeSystemPrivileges.
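
The block of "kubectl get sa default" runs above is a plain poll: after creating the minikube-rbac clusterrolebinding, the tooling retries roughly twice a second until the "default" service account appears, then records the total wait (15.78s in this run). A sketch of that retry pattern, shelling out the same way the logged commands do; the function and timeout are illustrative, not minikube's implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or the deadline passes.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil // the default service account now exists
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not created within %s", timeout)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence visible above
        }
    }

    func main() {
        start := time.Now()
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.18.20/kubectl",
            "/var/lib/minikube/kubeconfig",
            5*time.Minute,
        )
        fmt.Println("waited", time.Since(start), "err:", err)
    }
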
	I0817 21:24:35.669662  219168 kubeadm.go:406] StartCluster complete in 30.147933765s
	I0817 21:24:35.669679  219168 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:35.669789  219168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:24:35.670734  219168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:24:35.671779  219168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:24:35.671817  219168 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:24:35.671950  219168 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-449686"
	I0817 21:24:35.671967  219168 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-449686"
	I0817 21:24:35.671973  219168 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-449686"
	I0817 21:24:35.671987  219168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-449686"
	I0817 21:24:35.672041  219168 host.go:66] Checking if "ingress-addon-legacy-449686" exists ...
	I0817 21:24:35.672066  219168 config.go:182] Loaded profile config "ingress-addon-legacy-449686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0817 21:24:35.672549  219168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:24:35.672591  219168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:24:35.672601  219168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:24:35.672649  219168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:24:35.672552  219168 kapi.go:59] client config for ingress-addon-legacy-449686: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:24:35.673584  219168 cert_rotation.go:137] Starting client certificate rotation controller
	I0817 21:24:35.689245  219168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0817 21:24:35.689293  219168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0817 21:24:35.689719  219168 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:24:35.689788  219168 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:24:35.690284  219168 main.go:141] libmachine: Using API Version  1
	I0817 21:24:35.690312  219168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:24:35.690335  219168 main.go:141] libmachine: Using API Version  1
	I0817 21:24:35.690363  219168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:24:35.690703  219168 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:24:35.690709  219168 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:24:35.690917  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetState
	I0817 21:24:35.691252  219168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:24:35.691299  219168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:24:35.694078  219168 kapi.go:59] client config for ingress-addon-legacy-449686: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:24:35.706382  219168 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-449686"
	I0817 21:24:35.706447  219168 host.go:66] Checking if "ingress-addon-legacy-449686" exists ...
	I0817 21:24:35.706873  219168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:24:35.706933  219168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:24:35.707475  219168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0817 21:24:35.707957  219168 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:24:35.708471  219168 main.go:141] libmachine: Using API Version  1
	I0817 21:24:35.708501  219168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:24:35.708929  219168 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:24:35.709180  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetState
	I0817 21:24:35.711066  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:24:35.713262  219168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:24:35.714934  219168 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:24:35.714956  219168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:24:35.714979  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	W0817 21:24:35.715590  219168 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-449686" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0817 21:24:35.715623  219168 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0817 21:24:35.715649  219168 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:24:35.717472  219168 out.go:177] * Verifying Kubernetes components...
	I0817 21:24:35.719154  219168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:24:35.718741  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:24:35.719273  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:24:35.719315  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:24:35.719503  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:24:35.719736  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:24:35.719914  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:24:35.720057  219168 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa Username:docker}
	I0817 21:24:35.724322  219168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0817 21:24:35.724731  219168 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:24:35.725245  219168 main.go:141] libmachine: Using API Version  1
	I0817 21:24:35.725277  219168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:24:35.725619  219168 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:24:35.726096  219168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:24:35.726142  219168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:24:35.741071  219168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0817 21:24:35.741592  219168 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:24:35.742178  219168 main.go:141] libmachine: Using API Version  1
	I0817 21:24:35.742201  219168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:24:35.742530  219168 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:24:35.742704  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetState
	I0817 21:24:35.744177  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .DriverName
	I0817 21:24:35.744451  219168 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:24:35.744467  219168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:24:35.744484  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHHostname
	I0817 21:24:35.747263  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:24:35.747837  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:52:c4", ip: ""} in network mk-ingress-addon-legacy-449686: {Iface:virbr1 ExpiryTime:2023-08-17 22:23:43 +0000 UTC Type:0 Mac:52:54:00:86:52:c4 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-449686 Clientid:01:52:54:00:86:52:c4}
	I0817 21:24:35.747871  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | domain ingress-addon-legacy-449686 has defined IP address 192.168.39.250 and MAC address 52:54:00:86:52:c4 in network mk-ingress-addon-legacy-449686
	I0817 21:24:35.747987  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHPort
	I0817 21:24:35.748190  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHKeyPath
	I0817 21:24:35.748340  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .GetSSHUsername
	I0817 21:24:35.748477  219168 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/ingress-addon-legacy-449686/id_rsa Username:docker}
	I0817 21:24:35.842302  219168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 21:24:35.842943  219168 kapi.go:59] client config for ingress-addon-legacy-449686: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:24:35.843294  219168 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-449686" to be "Ready" ...
	I0817 21:24:35.851447  219168 node_ready.go:49] node "ingress-addon-legacy-449686" has status "Ready":"True"
	I0817 21:24:35.851477  219168 node_ready.go:38] duration metric: took 8.145705ms waiting for node "ingress-addon-legacy-449686" to be "Ready" ...
	I0817 21:24:35.851492  219168 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:24:35.862964  219168 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-2skxw" in "kube-system" namespace to be "Ready" ...
	I0817 21:24:35.867547  219168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:24:35.892076  219168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:24:36.554940  219168 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
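
The sed pipeline a few lines up and the "host record injected" message here are the CoreDNS customization step: the Corefile in the coredns ConfigMap gets a hosts block (resolving host.minikube.internal to 192.168.39.1) inserted just before its forward directive, plus a log directive before errors, and the ConfigMap is then replaced. A small Go sketch of the same text transformation (hosts block only; the log directive is omitted for brevity); it mirrors the sed, it is not minikube's code.

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts block ahead of the "forward . /etc/resolv.conf"
    // line so that host.minikube.internal resolves to the host-side gateway IP.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP,
        )
        var out strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
            out.WriteString("\n")
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}"
        fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }
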
	I0817 21:24:36.630119  219168 main.go:141] libmachine: Making call to close driver server
	I0817 21:24:36.630150  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .Close
	I0817 21:24:36.630177  219168 main.go:141] libmachine: Making call to close driver server
	I0817 21:24:36.630218  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .Close
	I0817 21:24:36.630518  219168 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:24:36.630535  219168 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:24:36.630546  219168 main.go:141] libmachine: Making call to close driver server
	I0817 21:24:36.630558  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .Close
	I0817 21:24:36.630655  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Closing plugin on server side
	I0817 21:24:36.630656  219168 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:24:36.630688  219168 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:24:36.630699  219168 main.go:141] libmachine: Making call to close driver server
	I0817 21:24:36.630713  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .Close
	I0817 21:24:36.630797  219168 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:24:36.630798  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Closing plugin on server side
	I0817 21:24:36.630808  219168 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:24:36.630822  219168 main.go:141] libmachine: Making call to close driver server
	I0817 21:24:36.630834  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) Calling .Close
	I0817 21:24:36.630948  219168 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:24:36.630959  219168 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:24:36.632145  219168 main.go:141] libmachine: (ingress-addon-legacy-449686) DBG | Closing plugin on server side
	I0817 21:24:36.632248  219168 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:24:36.632258  219168 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:24:36.634781  219168 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 21:24:36.636279  219168 addons.go:502] enable addons completed in 964.488008ms: enabled=[storage-provisioner default-storageclass]
	I0817 21:24:37.881168  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:40.381449  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:42.881535  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:44.882274  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:47.379774  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:49.381202  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:51.882290  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:54.380452  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:56.381047  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:24:58.381349  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:00.381967  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:02.881576  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:04.882408  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:07.380380  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:09.382669  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:11.881560  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:14.381522  219168 pod_ready.go:102] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"False"
	I0817 21:25:15.381633  219168 pod_ready.go:92] pod "coredns-66bff467f8-2skxw" in "kube-system" namespace has status "Ready":"True"
	I0817 21:25:15.381669  219168 pod_ready.go:81] duration metric: took 39.518666257s waiting for pod "coredns-66bff467f8-2skxw" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.381680  219168 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-l8j5l" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.901905  219168 pod_ready.go:92] pod "coredns-66bff467f8-l8j5l" in "kube-system" namespace has status "Ready":"True"
	I0817 21:25:15.901940  219168 pod_ready.go:81] duration metric: took 520.252995ms waiting for pod "coredns-66bff467f8-l8j5l" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.901954  219168 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.907828  219168 pod_ready.go:92] pod "etcd-ingress-addon-legacy-449686" in "kube-system" namespace has status "Ready":"True"
	I0817 21:25:15.907860  219168 pod_ready.go:81] duration metric: took 5.896951ms waiting for pod "etcd-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.907873  219168 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.912653  219168 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-449686" in "kube-system" namespace has status "Ready":"True"
	I0817 21:25:15.912674  219168 pod_ready.go:81] duration metric: took 4.792774ms waiting for pod "kube-apiserver-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.912685  219168 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:15.975056  219168 request.go:628] Waited for 62.28493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ingress-addon-legacy-449686
	I0817 21:25:16.175084  219168 request.go:628] Waited for 196.403826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ingress-addon-legacy-449686
	I0817 21:25:16.178737  219168 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-449686" in "kube-system" namespace has status "Ready":"True"
	I0817 21:25:16.178770  219168 pod_ready.go:81] duration metric: took 266.076675ms waiting for pod "kube-controller-manager-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:16.178786  219168 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:16.375248  219168 request.go:628] Waited for 196.356533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-449686
	I0817 21:25:16.575331  219168 request.go:628] Waited for 195.393565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ingress-addon-legacy-449686
	I0817 21:25:16.578658  219168 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-449686" in "kube-system" namespace has status "Ready":"True"
	I0817 21:25:16.578689  219168 pod_ready.go:81] duration metric: took 399.890285ms waiting for pod "kube-scheduler-ingress-addon-legacy-449686" in "kube-system" namespace to be "Ready" ...
	I0817 21:25:16.578711  219168 pod_ready.go:38] duration metric: took 40.727195569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
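
The pod_ready.go waits above all follow the same recipe: fetch each system pod, look at its PodReady condition, and log "Ready":"False" every couple of seconds until it turns True (the first coredns pod took 39.5s here; the rest were already Ready). A compact client-go sketch of that check for one pod, using this run's kubeconfig path; it illustrates the condition check, it is not the test helper itself.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-203458/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bff467f8-2skxw", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
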
	I0817 21:25:16.578737  219168 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:25:16.578802  219168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:25:16.593920  219168 api_server.go:72] duration metric: took 40.878222953s to wait for apiserver process to appear ...
	I0817 21:25:16.593948  219168 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:25:16.593968  219168 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0817 21:25:16.600845  219168 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0817 21:25:16.601908  219168 api_server.go:141] control plane version: v1.18.20
	I0817 21:25:16.601931  219168 api_server.go:131] duration metric: took 7.977045ms to wait for apiserver health ...
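
The healthz wait above is an HTTPS GET against https://192.168.39.250:8443/healthz using the profile's client certificate, and it is considered healthy once the endpoint returns 200 with an "ok" body. A bare-bones Go equivalent built from the certificate paths in the rest.Config dumps earlier in this log; illustrative only.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686"
        cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}
        resp, err := client.Get("https://192.168.39.250:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
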
	I0817 21:25:16.601939  219168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:25:16.775425  219168 request.go:628] Waited for 173.389973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0817 21:25:16.783016  219168 system_pods.go:59] 8 kube-system pods found
	I0817 21:25:16.783045  219168 system_pods.go:61] "coredns-66bff467f8-2skxw" [c26ec681-9602-47cc-b716-2c2962830e3d] Running
	I0817 21:25:16.783050  219168 system_pods.go:61] "coredns-66bff467f8-l8j5l" [39f260bd-18a1-4730-ad26-66cfcf02fbf7] Running
	I0817 21:25:16.783054  219168 system_pods.go:61] "etcd-ingress-addon-legacy-449686" [2f95a81e-2628-4cb9-96ed-2e10ff4c35e5] Running
	I0817 21:25:16.783062  219168 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-449686" [e6bd3fd1-6965-4419-94b1-d90dd17f1a86] Running
	I0817 21:25:16.783067  219168 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-449686" [8f698ff8-83d7-4d44-b5e0-ae6fe454edc3] Running
	I0817 21:25:16.783074  219168 system_pods.go:61] "kube-proxy-8x7vz" [ba2e6c04-5685-453f-971e-6a2373e5989e] Running
	I0817 21:25:16.783078  219168 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-449686" [af97dbe3-748b-49ab-a1a1-db4cd94f53e9] Running
	I0817 21:25:16.783082  219168 system_pods.go:61] "storage-provisioner" [d5da71a6-92ed-4913-9d05-f84eef964002] Running
	I0817 21:25:16.783088  219168 system_pods.go:74] duration metric: took 181.144038ms to wait for pod list to return data ...
	I0817 21:25:16.783100  219168 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:25:16.974512  219168 request.go:628] Waited for 191.322794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:25:16.977702  219168 default_sa.go:45] found service account: "default"
	I0817 21:25:16.977728  219168 default_sa.go:55] duration metric: took 194.621859ms for default service account to be created ...
	I0817 21:25:16.977736  219168 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:25:17.175215  219168 request.go:628] Waited for 197.402134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0817 21:25:17.182873  219168 system_pods.go:86] 8 kube-system pods found
	I0817 21:25:17.182906  219168 system_pods.go:89] "coredns-66bff467f8-2skxw" [c26ec681-9602-47cc-b716-2c2962830e3d] Running
	I0817 21:25:17.182911  219168 system_pods.go:89] "coredns-66bff467f8-l8j5l" [39f260bd-18a1-4730-ad26-66cfcf02fbf7] Running
	I0817 21:25:17.182915  219168 system_pods.go:89] "etcd-ingress-addon-legacy-449686" [2f95a81e-2628-4cb9-96ed-2e10ff4c35e5] Running
	I0817 21:25:17.182919  219168 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-449686" [e6bd3fd1-6965-4419-94b1-d90dd17f1a86] Running
	I0817 21:25:17.182924  219168 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-449686" [8f698ff8-83d7-4d44-b5e0-ae6fe454edc3] Running
	I0817 21:25:17.182927  219168 system_pods.go:89] "kube-proxy-8x7vz" [ba2e6c04-5685-453f-971e-6a2373e5989e] Running
	I0817 21:25:17.182931  219168 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-449686" [af97dbe3-748b-49ab-a1a1-db4cd94f53e9] Running
	I0817 21:25:17.182935  219168 system_pods.go:89] "storage-provisioner" [d5da71a6-92ed-4913-9d05-f84eef964002] Running
	I0817 21:25:17.182940  219168 system_pods.go:126] duration metric: took 205.199828ms to wait for k8s-apps to be running ...
	I0817 21:25:17.182948  219168 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:25:17.182993  219168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:25:17.195173  219168 system_svc.go:56] duration metric: took 12.211551ms WaitForService to wait for kubelet.
	I0817 21:25:17.195203  219168 kubeadm.go:581] duration metric: took 41.47950989s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:25:17.195229  219168 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:25:17.374631  219168 request.go:628] Waited for 179.308579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I0817 21:25:17.378247  219168 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:25:17.378276  219168 node_conditions.go:123] node cpu capacity is 2
	I0817 21:25:17.378288  219168 node_conditions.go:105] duration metric: took 183.053992ms to run NodePressure ...
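
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket limiter: the rest.Config dumped earlier shows QPS:0 and Burst:0, so the library defaults (roughly 5 requests per second with a burst of 10) apply and back-to-back GETs get spaced out on the client. Raising the limits is a small config change, sketched below with arbitrary values; whether that is desirable for the test harness is a separate question.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-203458/kubeconfig")
        if err != nil {
            panic(err)
        }
        // QPS/Burst of zero fall back to client-go's defaults; explicit values
        // remove the "Waited for ... due to client-side throttling" delays.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client configured:", cs != nil)
    }
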
	I0817 21:25:17.378299  219168 start.go:228] waiting for startup goroutines ...
	I0817 21:25:17.378305  219168 start.go:233] waiting for cluster config update ...
	I0817 21:25:17.378324  219168 start.go:242] writing updated cluster config ...
	I0817 21:25:17.378612  219168 ssh_runner.go:195] Run: rm -f paused
	I0817 21:25:17.430637  219168 start.go:600] kubectl: 1.28.0, cluster: 1.18.20 (minor skew: 10)
	I0817 21:25:17.432877  219168 out.go:177] 
	W0817 21:25:17.434476  219168 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0817 21:25:17.435914  219168 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0817 21:25:17.437504  219168 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-449686" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 21:23:40 UTC, ends at Thu 2023-08-17 21:28:18 UTC. --
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.316921965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c024d3151a489a9c081c5540f09b11825a9507d22c4687d161b754688df0d1,PodSandboxId:9797c54df79e0a25e2ea401f5aa38f1ff3c73e7402653e21104baafb75b70b44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692307685570660514,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-nmfzr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6e810be-7dae-4a0c-a160-9cf04dae47f9,},Annotations:map[string]string{io.kubernetes.container.hash: a16f53b6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef542c2453e6ecdc796bfcde08601b72a2d17c6275721a0016bde82c2d5f1aa,PodSandboxId:4117cf4a9c036fedda86c2c4005b622d124a53cebcb70c4a2e7d52b4eb6ac614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692307547292096254,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3de40a2f-a52b-425b-923e-ec0ead15b733,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8afa7b1a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130,PodSandboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692307508508839080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f,PodSandboxId:d4e1bffafab34fc0f22e755113739d206d20c31cfb9ea2da329d8139dbbbd48e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1692307478268988558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8x7vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2e6c04-5685-453f-971e-6a2373
e5989e,},Annotations:map[string]string{io.kubernetes.container.hash: ed0159ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8,PodSandboxId:8939a4e49b3d8f95689037ff7001023353d9272cd1a6a8cd3b73d18b95700d6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307478162333841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-l8j5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f260bd-18a1-4730-ad26-66cfcf02fbf7,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02,PodSandboxId:94f40322bbdd9657fd2efdf18efde4fbbf1a6d62dc86ab77bc6c16dfd9ef803e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307477227452075,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bff467f8-2skxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26ec681-9602-47cc-b716-2c2962830e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6,PodSandboxId:89c64fd0602426965e053d98b83af1a1b9017e9347efc2843663faa4ad9a32d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198
ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1692307452255469883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f5d6c5c055834308f061d510ac73d0,},Annotations:map[string]string{io.kubernetes.container.hash: 4007c40b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d,PodSandboxId:e049d2ecccae0bae3a6157d3079539b9e573884539500cdbecf42887690d369a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927ff
f648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1692307451171407826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd,PodSandboxId:c6a2e58644b6d37357fa769d1b009a421822ba983a14e6a041f7c87bc5688b72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665c
b8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1692307450935778042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3,PodSandboxId:8fc6ea684cec3aa59e82aaea2c4d99646ce760d7505e8ff18506e04adfca03b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf2
3f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1692307450716729703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=03be9e30-97d5-459b-9025-fe644ca4719e name=/runtime.v1alpha2.RuntimeService/ListContainers
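Each journal entry above and below is CRI-O answering an unfiltered ListContainers call on the v1alpha2 CRI RuntimeService, so the node's container inventory (hello-world-app, nginx, coredns, kube-proxy, the control-plane pods, and in later polls the exited ingress-nginx controller and admission jobs) is dumped again on every poll while the log is collected. A minimal sketch of pulling the same listing by hand from the node, assuming crictl is available inside the guest and CRI-O is on its default socket (illustrative only, not part of the recorded run):

  # List all containers (running and exited) through the CRI, as the RPCs above do.
  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 ssh "sudo crictl ps -a"
  # Narrow to the ingress-nginx controller container that the failing ingress test depends on.
  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 ssh "sudo crictl ps -a --name controller -o json"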
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.441952412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=076b79cf-c0a9-4380-8a24-e03deea82857 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.442167749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=076b79cf-c0a9-4380-8a24-e03deea82857 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.442495052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c024d3151a489a9c081c5540f09b11825a9507d22c4687d161b754688df0d1,PodSandboxId:9797c54df79e0a25e2ea401f5aa38f1ff3c73e7402653e21104baafb75b70b44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692307685570660514,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-nmfzr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6e810be-7dae-4a0c-a160-9cf04dae47f9,},Annotations:map[string]string{io.kubernetes.container.hash: a16f53b6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef542c2453e6ecdc796bfcde08601b72a2d17c6275721a0016bde82c2d5f1aa,PodSandboxId:4117cf4a9c036fedda86c2c4005b622d124a53cebcb70c4a2e7d52b4eb6ac614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692307547292096254,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3de40a2f-a52b-425b-923e-ec0ead15b733,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8afa7b1a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e5b917fc6ff93e14de7df922b16b90696854978f14314fce28e34b875cf42e,PodSandboxId:3a4218832883b541bfcb8e15de3fcb10314eebd3019520d962a566b242af8315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1692307529974522273,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w7b4x,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b301c03b-5458-448e-aefc-e835ec71037c,},Annotations:map[string]string{io.kubernetes.container.hash: 5daf2d05,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a2e178d8fb5058bd77e51f168025ae77e0c4428a5ee9dba7a2f38a2e3227603,PodSandboxId:53d1cdcf5bd9b0c14ef2dc5eda9db3867dd6c52c72aa7d2f970c51dbf2d9f280,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520965333776,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbtb5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e522f22-8526-4dd6-94b9-39b02e07254b,},Annotations:map[string]string{io.kubernetes.container.hash: c7880d78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2672440ccb00de8a0d633c07fc56d5b625021ee3518897c328b0d7c543bd4e13,PodSandboxId:69ef260dbfffb393dd7fe7f7b3295aa2cca76401c4c8bc1c880e3ec515c7c7f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520760059855,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jvfx2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 813e82b4-cc3d-479e-88a2-492e9cf83889,},Annotations:map[string]string{io.kubernetes.container.hash: 87eee358,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130,PodSandboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692307508508839080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f,PodSandboxId:d4e1bffafab34fc0f22e755113739d206d20c31cfb9ea2da329d8139dbbbd48e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1692307478268988558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8x7vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2e6c04-5685-453f-971e-6a2373e5989e,},Annotations:map[string]string{io.kubernetes.container.hash: ed0159ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8,PodSandboxId:8939a4e49b3d8f95689037ff7001023353d9272cd1a6a8cd3b73d18b95700d6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307478162333841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-l8j5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f260bd-18a1-4730-ad26-66cfcf02fbf7,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df6930cdfda7e6fa8e74e29640ecf736a6f183c1f3bfa57315459517365e461,PodS
andboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692307477490723668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02,PodSan
dboxId:94f40322bbdd9657fd2efdf18efde4fbbf1a6d62dc86ab77bc6c16dfd9ef803e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307477227452075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2skxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26ec681-9602-47cc-b716-2c2962830e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6,PodSandboxId:89c64fd0602426965e053d98b83af1a1b9017e9347efc2843663faa4ad9a32d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1692307452255469883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f5d6c5c055834308f061d510ac73d0,},Annotations:map[string]string{io.kubernetes.container.hash: 4007c40b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d,PodSandboxId:e049d2ecccae0bae3a6157d3079539b9e573884539500cdbecf42887690d369a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1692307451171407826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd,PodSandboxId:c6a2e58644b6d37357fa769d1b009a421822ba983a14e6a041f7c87bc5688b72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1692307450935778042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3,PodSandboxId:8fc6ea684cec3aa59e82aaea2c4d99646ce760d7505e8ff18506e04adfca03b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1692307450716729703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=076b79cf-c0a9-4380-8a24-e03deea82857 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.478663100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a855f2ba-4d2e-478d-96cc-b6c202dcb41a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.478730482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a855f2ba-4d2e-478d-96cc-b6c202dcb41a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.479095921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c024d3151a489a9c081c5540f09b11825a9507d22c4687d161b754688df0d1,PodSandboxId:9797c54df79e0a25e2ea401f5aa38f1ff3c73e7402653e21104baafb75b70b44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692307685570660514,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-nmfzr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6e810be-7dae-4a0c-a160-9cf04dae47f9,},Annotations:map[string]string{io.kubernetes.container.hash: a16f53b6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef542c2453e6ecdc796bfcde08601b72a2d17c6275721a0016bde82c2d5f1aa,PodSandboxId:4117cf4a9c036fedda86c2c4005b622d124a53cebcb70c4a2e7d52b4eb6ac614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692307547292096254,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3de40a2f-a52b-425b-923e-ec0ead15b733,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8afa7b1a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e5b917fc6ff93e14de7df922b16b90696854978f14314fce28e34b875cf42e,PodSandboxId:3a4218832883b541bfcb8e15de3fcb10314eebd3019520d962a566b242af8315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1692307529974522273,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w7b4x,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b301c03b-5458-448e-aefc-e835ec71037c,},Annotations:map[string]string{io.kubernetes.container.hash: 5daf2d05,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a2e178d8fb5058bd77e51f168025ae77e0c4428a5ee9dba7a2f38a2e3227603,PodSandboxId:53d1cdcf5bd9b0c14ef2dc5eda9db3867dd6c52c72aa7d2f970c51dbf2d9f280,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520965333776,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbtb5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e522f22-8526-4dd6-94b9-39b02e07254b,},Annotations:map[string]string{io.kubernetes.container.hash: c7880d78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2672440ccb00de8a0d633c07fc56d5b625021ee3518897c328b0d7c543bd4e13,PodSandboxId:69ef260dbfffb393dd7fe7f7b3295aa2cca76401c4c8bc1c880e3ec515c7c7f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520760059855,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jvfx2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 813e82b4-cc3d-479e-88a2-492e9cf83889,},Annotations:map[string]string{io.kubernetes.container.hash: 87eee358,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130,PodSandboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692307508508839080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f,PodSandboxId:d4e1bffafab34fc0f22e755113739d206d20c31cfb9ea2da329d8139dbbbd48e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1692307478268988558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8x7vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2e6c04-5685-453f-971e-6a2373e5989e,},Annotations:map[string]string{io.kubernetes.container.hash: ed0159ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8,PodSandboxId:8939a4e49b3d8f95689037ff7001023353d9272cd1a6a8cd3b73d18b95700d6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307478162333841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-l8j5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f260bd-18a1-4730-ad26-66cfcf02fbf7,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df6930cdfda7e6fa8e74e29640ecf736a6f183c1f3bfa57315459517365e461,PodS
andboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692307477490723668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02,PodSan
dboxId:94f40322bbdd9657fd2efdf18efde4fbbf1a6d62dc86ab77bc6c16dfd9ef803e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307477227452075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2skxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26ec681-9602-47cc-b716-2c2962830e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6,PodSandboxId:89c64fd0602426965e053d98b83af1a1b9017e9347efc2843663faa4ad9a32d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1692307452255469883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f5d6c5c055834308f061d510ac73d0,},Annotations:map[string]string{io.kubernetes.container.hash: 4007c40b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d,PodSandboxId:e049d2ecccae0bae3a6157d3079539b9e573884539500cdbecf42887690d369a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1692307451171407826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd,PodSandboxId:c6a2e58644b6d37357fa769d1b009a421822ba983a14e6a041f7c87bc5688b72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1692307450935778042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3,PodSandboxId:8fc6ea684cec3aa59e82aaea2c4d99646ce760d7505e8ff18506e04adfca03b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1692307450716729703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a855f2ba-4d2e-478d-96cc-b6c202dcb41a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.519995476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9d42eae7-a903-4cdc-950c-e3ff81957e79 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.520094728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9d42eae7-a903-4cdc-950c-e3ff81957e79 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.520488521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c024d3151a489a9c081c5540f09b11825a9507d22c4687d161b754688df0d1,PodSandboxId:9797c54df79e0a25e2ea401f5aa38f1ff3c73e7402653e21104baafb75b70b44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692307685570660514,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-nmfzr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6e810be-7dae-4a0c-a160-9cf04dae47f9,},Annotations:map[string]string{io.kubernetes.container.hash: a16f53b6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef542c2453e6ecdc796bfcde08601b72a2d17c6275721a0016bde82c2d5f1aa,PodSandboxId:4117cf4a9c036fedda86c2c4005b622d124a53cebcb70c4a2e7d52b4eb6ac614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692307547292096254,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3de40a2f-a52b-425b-923e-ec0ead15b733,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8afa7b1a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e5b917fc6ff93e14de7df922b16b90696854978f14314fce28e34b875cf42e,PodSandboxId:3a4218832883b541bfcb8e15de3fcb10314eebd3019520d962a566b242af8315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1692307529974522273,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w7b4x,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b301c03b-5458-448e-aefc-e835ec71037c,},Annotations:map[string]string{io.kubernetes.container.hash: 5daf2d05,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a2e178d8fb5058bd77e51f168025ae77e0c4428a5ee9dba7a2f38a2e3227603,PodSandboxId:53d1cdcf5bd9b0c14ef2dc5eda9db3867dd6c52c72aa7d2f970c51dbf2d9f280,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520965333776,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbtb5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e522f22-8526-4dd6-94b9-39b02e07254b,},Annotations:map[string]string{io.kubernetes.container.hash: c7880d78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2672440ccb00de8a0d633c07fc56d5b625021ee3518897c328b0d7c543bd4e13,PodSandboxId:69ef260dbfffb393dd7fe7f7b3295aa2cca76401c4c8bc1c880e3ec515c7c7f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520760059855,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jvfx2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 813e82b4-cc3d-479e-88a2-492e9cf83889,},Annotations:map[string]string{io.kubernetes.container.hash: 87eee358,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130,PodSandboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692307508508839080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f,PodSandboxId:d4e1bffafab34fc0f22e755113739d206d20c31cfb9ea2da329d8139dbbbd48e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1692307478268988558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8x7vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2e6c04-5685-453f-971e-6a2373e5989e,},Annotations:map[string]string{io.kubernetes.container.hash: ed0159ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8,PodSandboxId:8939a4e49b3d8f95689037ff7001023353d9272cd1a6a8cd3b73d18b95700d6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307478162333841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-l8j5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f260bd-18a1-4730-ad26-66cfcf02fbf7,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df6930cdfda7e6fa8e74e29640ecf736a6f183c1f3bfa57315459517365e461,PodS
andboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692307477490723668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02,PodSan
dboxId:94f40322bbdd9657fd2efdf18efde4fbbf1a6d62dc86ab77bc6c16dfd9ef803e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307477227452075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2skxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26ec681-9602-47cc-b716-2c2962830e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6,PodSandboxId:89c64fd0602426965e053d98b83af1a1b9017e9347efc2843663faa4ad9a32d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1692307452255469883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f5d6c5c055834308f061d510ac73d0,},Annotations:map[string]string{io.kubernetes.container.hash: 4007c40b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d,PodSandboxId:e049d2ecccae0bae3a6157d3079539b9e573884539500cdbecf42887690d369a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1692307451171407826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd,PodSandboxId:c6a2e58644b6d37357fa769d1b009a421822ba983a14e6a041f7c87bc5688b72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1692307450935778042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3,PodSandboxId:8fc6ea684cec3aa59e82aaea2c4d99646ce760d7505e8ff18506e04adfca03b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1692307450716729703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9d42eae7-a903-4cdc-950c-e3ff81957e79 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.561730627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e5136a88-68fb-4b6b-a0ad-a2d48d5cd45c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.561795530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e5136a88-68fb-4b6b-a0ad-a2d48d5cd45c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.562203177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c024d3151a489a9c081c5540f09b11825a9507d22c4687d161b754688df0d1,PodSandboxId:9797c54df79e0a25e2ea401f5aa38f1ff3c73e7402653e21104baafb75b70b44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692307685570660514,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-nmfzr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6e810be-7dae-4a0c-a160-9cf04dae47f9,},Annotations:map[string]string{io.kubernetes.container.hash: a16f53b6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef542c2453e6ecdc796bfcde08601b72a2d17c6275721a0016bde82c2d5f1aa,PodSandboxId:4117cf4a9c036fedda86c2c4005b622d124a53cebcb70c4a2e7d52b4eb6ac614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692307547292096254,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3de40a2f-a52b-425b-923e-ec0ead15b733,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8afa7b1a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e5b917fc6ff93e14de7df922b16b90696854978f14314fce28e34b875cf42e,PodSandboxId:3a4218832883b541bfcb8e15de3fcb10314eebd3019520d962a566b242af8315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1692307529974522273,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w7b4x,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b301c03b-5458-448e-aefc-e835ec71037c,},Annotations:map[string]string{io.kubernetes.container.hash: 5daf2d05,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a2e178d8fb5058bd77e51f168025ae77e0c4428a5ee9dba7a2f38a2e3227603,PodSandboxId:53d1cdcf5bd9b0c14ef2dc5eda9db3867dd6c52c72aa7d2f970c51dbf2d9f280,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520965333776,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbtb5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e522f22-8526-4dd6-94b9-39b02e07254b,},Annotations:map[string]string{io.kubernetes.container.hash: c7880d78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2672440ccb00de8a0d633c07fc56d5b625021ee3518897c328b0d7c543bd4e13,PodSandboxId:69ef260dbfffb393dd7fe7f7b3295aa2cca76401c4c8bc1c880e3ec515c7c7f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520760059855,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jvfx2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 813e82b4-cc3d-479e-88a2-492e9cf83889,},Annotations:map[string]string{io.kubernetes.container.hash: 87eee358,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130,PodSandboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692307508508839080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f,PodSandboxId:d4e1bffafab34fc0f22e755113739d206d20c31cfb9ea2da329d8139dbbbd48e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1692307478268988558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8x7vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2e6c04-5685-453f-971e-6a2373e5989e,},Annotations:map[string]string{io.kubernetes.container.hash: ed0159ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8,PodSandboxId:8939a4e49b3d8f95689037ff7001023353d9272cd1a6a8cd3b73d18b95700d6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307478162333841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-l8j5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f260bd-18a1-4730-ad26-66cfcf02fbf7,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df6930cdfda7e6fa8e74e29640ecf736a6f183c1f3bfa57315459517365e461,PodS
andboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692307477490723668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02,PodSan
dboxId:94f40322bbdd9657fd2efdf18efde4fbbf1a6d62dc86ab77bc6c16dfd9ef803e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307477227452075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2skxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26ec681-9602-47cc-b716-2c2962830e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6,PodSandboxId:89c64fd0602426965e053d98b83af1a1b9017e9347efc2843663faa4ad9a32d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1692307452255469883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f5d6c5c055834308f061d510ac73d0,},Annotations:map[string]string{io.kubernetes.container.hash: 4007c40b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d,PodSandboxId:e049d2ecccae0bae3a6157d3079539b9e573884539500cdbecf42887690d369a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1692307451171407826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd,PodSandboxId:c6a2e58644b6d37357fa769d1b009a421822ba983a14e6a041f7c87bc5688b72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1692307450935778042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3,PodSandboxId:8fc6ea684cec3aa59e82aaea2c4d99646ce760d7505e8ff18506e04adfca03b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1692307450716729703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e5136a88-68fb-4b6b-a0ad-a2d48d5cd45c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.595295207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=162cbf50-9647-42da-9ed8-3e19b3b81db7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.595402004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=162cbf50-9647-42da-9ed8-3e19b3b81db7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.595709192Z" level=debug msg="Response: &ListContainersResponse{Containers:[...container list identical to the 21:28:18.562203177Z response above...],}" file="go-grpc-middleware/chain.go:25" id=162cbf50-9647-42da-9ed8-3e19b3b81db7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.627468984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ffbdfabf-27ec-4d9c-8ce3-2eb1ca859f09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.627568364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ffbdfabf-27ec-4d9c-8ce3-2eb1ca859f09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.627908362Z" level=debug msg="Response: &ListContainersResponse{Containers:[...container list identical to the 21:28:18.562203177Z response above...],}" file="go-grpc-middleware/chain.go:25" id=ffbdfabf-27ec-4d9c-8ce3-2eb1ca859f09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.667597027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46e8d387-e651-4756-bad8-921195ccc58c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.667692070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46e8d387-e651-4756-bad8-921195ccc58c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:28:18 ingress-addon-legacy-449686 crio[719]: time="2023-08-17 21:28:18.668018570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c024d3151a489a9c081c5540f09b11825a9507d22c4687d161b754688df0d1,PodSandboxId:9797c54df79e0a25e2ea401f5aa38f1ff3c73e7402653e21104baafb75b70b44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1692307685570660514,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-nmfzr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6e810be-7dae-4a0c-a160-9cf04dae47f9,},Annotations:map[string]string{io.kubernetes.container.hash: a16f53b6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef542c2453e6ecdc796bfcde08601b72a2d17c6275721a0016bde82c2d5f1aa,PodSandboxId:4117cf4a9c036fedda86c2c4005b622d124a53cebcb70c4a2e7d52b4eb6ac614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a,State:CONTAINER_RUNNING,CreatedAt:1692307547292096254,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3de40a2f-a52b-425b-923e-ec0ead15b733,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8afa7b1a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e5b917fc6ff93e14de7df922b16b90696854978f14314fce28e34b875cf42e,PodSandboxId:3a4218832883b541bfcb8e15de3fcb10314eebd3019520d962a566b242af8315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1692307529974522273,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w7b4x,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b301c03b-5458-448e-aefc-e835ec71037c,},Annotations:map[string]string{io.kubernetes.container.hash: 5daf2d05,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a2e178d8fb5058bd77e51f168025ae77e0c4428a5ee9dba7a2f38a2e3227603,PodSandboxId:53d1cdcf5bd9b0c14ef2dc5eda9db3867dd6c52c72aa7d2f970c51dbf2d9f280,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520965333776,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbtb5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e522f22-8526-4dd6-94b9-39b02e07254b,},Annotations:map[string]string{io.kubernetes.container.hash: c7880d78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2672440ccb00de8a0d633c07fc56d5b625021ee3518897c328b0d7c543bd4e13,PodSandboxId:69ef260dbfffb393dd7fe7f7b3295aa2cca76401c4c8bc1c880e3ec515c7c7f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1692307520760059855,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jvfx2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 813e82b4-cc3d-479e-88a2-492e9cf83889,},Annotations:map[string]string{io.kubernetes.container.hash: 87eee358,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130,PodSandboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692307508508839080,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f,PodSandboxId:d4e1bffafab34fc0f22e755113739d206d20c31cfb9ea2da329d8139dbbbd48e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1692307478268988558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8x7vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2e6c04-5685-453f-971e-6a2373e5989e,},Annotations:map[string]string{io.kubernetes.container.hash: ed0159ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8,PodSandboxId:8939a4e49b3d8f95689037ff7001023353d9272cd1a6a8cd3b73d18b95700d6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307478162333841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-l8j5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f260bd-18a1-4730-ad26-66cfcf02fbf7,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df6930cdfda7e6fa8e74e29640ecf736a6f183c1f3bfa57315459517365e461,PodS
andboxId:7ad987557e4c2749715f49263561c96aa6f93d7a3f7e38e2ef00401eaca73bb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692307477490723668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5da71a6-92ed-4913-9d05-f84eef964002,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6847b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02,PodSan
dboxId:94f40322bbdd9657fd2efdf18efde4fbbf1a6d62dc86ab77bc6c16dfd9ef803e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1692307477227452075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2skxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26ec681-9602-47cc-b716-2c2962830e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b5c9cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6,PodSandboxId:89c64fd0602426965e053d98b83af1a1b9017e9347efc2843663faa4ad9a32d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1692307452255469883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51f5d6c5c055834308f061d510ac73d0,},Annotations:map[string]string{io.kubernetes.container.hash: 4007c40b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d,PodSandboxId:e049d2ecccae0bae3a6157d3079539b9e573884539500cdbecf42887690d369a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1692307451171407826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd,PodSandboxId:c6a2e58644b6d37357fa769d1b009a421822ba983a14e6a041f7c87bc5688b72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1692307450935778042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3,PodSandboxId:8fc6ea684cec3aa59e82aaea2c4d99646ce760d7505e8ff18506e04adfca03b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1692307450716729703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-449686,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46e8d387-e651-4756-bad8-921195ccc58c name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	c1c024d3151a4       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            13 seconds ago      Running             hello-world-app           0                   9797c54df79e0
	9ef542c2453e6       docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a                    2 minutes ago       Running             nginx                     0                   4117cf4a9c036
	24e5b917fc6ff       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   3a4218832883b
	9a2e178d8fb50       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   53d1cdcf5bd9b
	2672440ccb00d       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   69ef260dbfffb
	aab563ec5373b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   7ad987557e4c2
	8beb75fe4da8b       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   d4e1bffafab34
	568b624a84f03       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   8939a4e49b3d8
	4df6930cdfda7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   7ad987557e4c2
	5decf261faa0a       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   94f40322bbdd9
	36f877eb4b4f5       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   89c64fd060242
	575b7fc0bf682       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   e049d2ecccae0
	d6ca898c218bb       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   c6a2e58644b6d
	19c809a48e5ef       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   8fc6ea684cec3
	
	* 
	* ==> coredns [568b624a84f032ecdbce3e098117f51304be7dde94f7bb5f9bc4df2aa6fbdbe8] <==
	* E0817 21:25:08.360305       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0817 21:25:08.360283       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-08-17 21:24:38.359684697 +0000 UTC m=+0.031237761) (total time: 30.000580683s):
	Trace[939984059]: [30.000580683s] [30.000580683s] END
	E0817 21:25:08.360330       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.6:33260 - 51017 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001082835s
	[INFO] 10.244.0.6:45619 - 20297 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00164635s
	[INFO] 10.244.0.6:33260 - 12120 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0002324s
	[INFO] 10.244.0.6:45619 - 45910 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000097466s
	[INFO] 10.244.0.6:33260 - 41005 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000128367s
	[INFO] 10.244.0.6:45619 - 11227 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072842s
	[INFO] 10.244.0.6:33260 - 31168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000120173s
	[INFO] 10.244.0.6:33260 - 10512 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000236974s
	[INFO] 10.244.0.6:45619 - 43967 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00009985s
	[INFO] 10.244.0.6:45619 - 18081 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037467s
	[INFO] 10.244.0.6:33260 - 46794 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073867s
	[INFO] 10.244.0.6:45619 - 5811 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036639s
	[INFO] 10.244.0.6:33260 - 44947 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103796s
	[INFO] 10.244.0.6:45619 - 40377 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000204443s
	[INFO] 10.244.0.6:51823 - 64683 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00015864s
	[INFO] 10.244.0.6:51823 - 62710 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000052163s
	[INFO] 10.244.0.6:51823 - 31215 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074284s
	[INFO] 10.244.0.6:51823 - 21824 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035304s
	[INFO] 10.244.0.6:51823 - 42602 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085231s
	[INFO] 10.244.0.6:51823 - 18776 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058833s
	[INFO] 10.244.0.6:51823 - 32896 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041389s
	
	* 
	* ==> coredns [5decf261faa0adf939956b9e8de4e2c291440fafeaf8d8bede9e6c86d929ea02] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:55527 - 37428 "HINFO IN 2602137980331660119.5410120438600553666. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011048879s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.6:41965 - 23941 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000381756s
	[INFO] 10.244.0.6:41965 - 30050 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103557s
	[INFO] 10.244.0.6:41965 - 31568 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000251702s
	[INFO] 10.244.0.6:41965 - 64062 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073719s
	[INFO] 10.244.0.6:41965 - 62676 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000522507s
	[INFO] 10.244.0.6:41965 - 46370 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000443887s
	[INFO] 10.244.0.6:41965 - 60691 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128022s
	I0817 21:25:07.378267       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-08-17 21:24:37.377430774 +0000 UTC m=+0.033401325) (total time: 30.000612074s):
	Trace[2019727887]: [30.000612074s] [30.000612074s] END
	E0817 21:25:07.378373       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0817 21:25:07.379827       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-08-17 21:24:37.379401608 +0000 UTC m=+0.035372174) (total time: 30.000408332s):
	Trace[1427131847]: [30.000408332s] [30.000408332s] END
	E0817 21:25:07.379878       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0817 21:25:07.380062       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-08-17 21:24:37.379348133 +0000 UTC m=+0.035318700) (total time: 30.000702644s):
	Trace[939984059]: [30.000702644s] [30.000702644s] END
	E0817 21:25:07.380091       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-449686
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-449686
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=ingress-addon-legacy-449686
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_24_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:24:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-449686
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:28:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:25:50 +0000   Thu, 17 Aug 2023 21:24:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:25:50 +0000   Thu, 17 Aug 2023 21:24:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:25:50 +0000   Thu, 17 Aug 2023 21:24:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:25:50 +0000   Thu, 17 Aug 2023 21:24:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ingress-addon-legacy-449686
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5e8314334f4260955db3af87258733
	  System UUID:                7d5e8314-334f-4260-955d-b3af87258733
	  Boot ID:                    dee79389-e72a-4e9b-8a9a-4e1c589dfb0a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-nmfzr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-66bff467f8-2skxw                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 coredns-66bff467f8-l8j5l                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-449686                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-449686             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-449686    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-8x7vz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-449686             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m10s (x4 over 4m10s)  kubelet     Node ingress-addon-legacy-449686 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x4 over 4m10s)  kubelet     Node ingress-addon-legacy-449686 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x4 over 4m10s)  kubelet     Node ingress-addon-legacy-449686 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                  kubelet     Node ingress-addon-legacy-449686 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                  kubelet     Node ingress-addon-legacy-449686 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                  kubelet     Node ingress-addon-legacy-449686 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m49s                  kubelet     Node ingress-addon-legacy-449686 status is now: NodeReady
	  Normal  Starting                 3m41s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug17 21:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.102392] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.399715] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.614500] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156368] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.077973] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.514364] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.107755] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.151950] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.107199] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.218410] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Aug17 21:24] systemd-fstab-generator[1036]: Ignoring "noauto" for root device
	[  +3.344411] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.203598] systemd-fstab-generator[1443]: Ignoring "noauto" for root device
	[ +17.534716] kauditd_printk_skb: 6 callbacks suppressed
	[Aug17 21:25] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.618688] kauditd_printk_skb: 8 callbacks suppressed
	[ +22.009702] kauditd_printk_skb: 7 callbacks suppressed
	[Aug17 21:28] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.523453] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [36f877eb4b4f54a34b3af069bffb7b2eafeeead1fade7969d707dc8369f9b4f6] <==
	* 2023-08-17 21:24:12.472602 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-17 21:24:12.476040 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-17 21:24:12.476371 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-17 21:24:12.476749 I | embed: listening for peers on 192.168.39.250:2380
	2023-08-17 21:24:12.477214 I | etcdserver: a69e859ffe38fcde as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/17 21:24:12 INFO: a69e859ffe38fcde switched to configuration voters=(12006180578827762910)
	2023-08-17 21:24:12.477833 I | etcdserver/membership: added member a69e859ffe38fcde [https://192.168.39.250:2380] to cluster f7a04275a0bf31
	raft2023/08/17 21:24:13 INFO: a69e859ffe38fcde is starting a new election at term 1
	raft2023/08/17 21:24:13 INFO: a69e859ffe38fcde became candidate at term 2
	raft2023/08/17 21:24:13 INFO: a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 2
	raft2023/08/17 21:24:13 INFO: a69e859ffe38fcde became leader at term 2
	raft2023/08/17 21:24:13 INFO: raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 2
	2023-08-17 21:24:13.354605 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-17 21:24:13.356234 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-17 21:24:13.356283 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-17 21:24:13.356305 I | etcdserver: published {Name:ingress-addon-legacy-449686 ClientURLs:[https://192.168.39.250:2379]} to cluster f7a04275a0bf31
	2023-08-17 21:24:13.356635 I | embed: ready to serve client requests
	2023-08-17 21:24:13.356823 I | embed: ready to serve client requests
	2023-08-17 21:24:13.357830 I | embed: serving client requests on 192.168.39.250:2379
	2023-08-17 21:24:13.358563 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-17 21:24:35.374699 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:210" took too long (486.253615ms) to execute
	2023-08-17 21:24:35.375292 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (315.629865ms) to execute
	2023-08-17 21:25:27.149767 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13726" took too long (108.083091ms) to execute
	2023-08-17 21:25:35.444010 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true " with result "range_response_count:0 size:7" took too long (124.681867ms) to execute
	2023-08-17 21:25:45.826383 W | etcdserver: read-only range request "key:\"/registry/events/ingress-nginx/ingress-nginx-controller-7fcf777cb7-w7b4x.177c48bbea4a496a\" " with result "range_response_count:1 size:839" took too long (120.834213ms) to execute
	
	* 
	* ==> kernel <==
	*  21:28:19 up 4 min,  0 users,  load average: 0.38, 0.48, 0.23
	Linux ingress-addon-legacy-449686 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [19c809a48e5ef4876b3ea7a508a08f81cf6b7b853f41dee43ad3ef03bf6440a3] <==
	* I0817 21:24:16.365811       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0817 21:24:16.396479       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.250, ResourceVersion: 0, AdditionalErrorMsg: 
	I0817 21:24:16.434450       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0817 21:24:16.434543       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 21:24:16.434564       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:24:16.434841       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:24:16.467867       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0817 21:24:17.330700       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 21:24:17.330878       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:24:17.338645       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0817 21:24:17.346003       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0817 21:24:17.346042       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0817 21:24:17.914196       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 21:24:17.972791       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 21:24:18.043300       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I0817 21:24:18.044284       1 controller.go:609] quota admission added evaluator for: endpoints
	I0817 21:24:18.048552       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 21:24:18.667780       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0817 21:24:19.732030       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0817 21:24:19.793558       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0817 21:24:20.160890       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 21:24:35.518053       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0817 21:24:35.894731       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0817 21:25:18.282257       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0817 21:25:44.053435       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [d6ca898c218bb57a6615bb08c70ae6124d1ca43e2ade817d8b6a9e080ac478cd] <==
	* I0817 21:24:35.877619       1 shared_informer.go:230] Caches are synced for attach detach 
	I0817 21:24:35.889732       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0817 21:24:35.910027       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6ef51dc4-ac31-4075-bf69-b84694d360a2", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-8x7vz
	I0817 21:24:36.032203       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 21:24:36.037236       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 21:24:36.078174       1 shared_informer.go:230] Caches are synced for taint 
	I0817 21:24:36.078243       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0817 21:24:36.078374       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-449686. Assuming now as a timestamp.
	I0817 21:24:36.078414       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0817 21:24:36.078658       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0817 21:24:36.078942       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-449686", UID:"88e7cd44-99e8-4024-b9c6-babf9e327822", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-449686 event: Registered Node ingress-addon-legacy-449686 in Controller
	I0817 21:24:36.080908       1 shared_informer.go:230] Caches are synced for resource quota 
	E0817 21:24:36.113355       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6ef51dc4-ac31-4075-bf69-b84694d360a2", ResourceVersion:"216", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63827904259, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001ad2d40), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc001ad2d60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ad2d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a57100), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc001ad2da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ad2dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001ad2e00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001ab0820), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ade568), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003dccb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0012286f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ade5b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0817 21:24:36.124781       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 21:24:36.124822       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 21:25:18.259697       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b45cb24e-e237-4b3a-8793-c17a1bada27c", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0817 21:25:18.285270       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a2e8a555-d0e9-435e-92f0-3e7c5cd7eab5", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-w7b4x
	I0817 21:25:18.352939       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"28741b86-aaf0-4558-b423-c661bdd2ee31", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-jvfx2
	I0817 21:25:18.436550       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7f1ead61-ddf3-4b62-acf9-88e6279a6c23", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hbtb5
	I0817 21:25:21.790545       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"28741b86-aaf0-4558-b423-c661bdd2ee31", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0817 21:25:21.857346       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7f1ead61-ddf3-4b62-acf9-88e6279a6c23", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0817 21:28:03.174304       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"fe31fc49-67da-4307-b8d5-867dab99fe28", APIVersion:"apps/v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0817 21:28:03.208722       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"4e120256-d8c6-4211-bceb-d4a747d2e1b0", APIVersion:"apps/v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-nmfzr
	E0817 21:28:15.860561       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-82xfp" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [8beb75fe4da8b0b9fc78de0a2ee8dfa24c8badf5f9dadd2147eaaf0a3609f28f] <==
	* W0817 21:24:38.520038       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0817 21:24:38.529225       1 node.go:136] Successfully retrieved node IP: 192.168.39.250
	I0817 21:24:38.529277       1 server_others.go:186] Using iptables Proxier.
	I0817 21:24:38.529815       1 server.go:583] Version: v1.18.20
	I0817 21:24:38.532794       1 config.go:315] Starting service config controller
	I0817 21:24:38.532838       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0817 21:24:38.532856       1 config.go:133] Starting endpoints config controller
	I0817 21:24:38.532864       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0817 21:24:38.633467       1 shared_informer.go:230] Caches are synced for service config 
	I0817 21:24:38.633483       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [575b7fc0bf68237f275d8bd31f06c1e0bf6d60a73f57d8a2444e1a416d760b3d] <==
	* I0817 21:24:16.431843       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 21:24:16.431763       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 21:24:16.437850       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:24:16.437955       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 21:24:16.438044       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:24:16.438195       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:24:16.438288       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:24:16.438360       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 21:24:16.438443       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:24:16.441397       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:24:16.441576       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 21:24:16.441670       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:24:16.441835       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 21:24:16.442012       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:24:17.258352       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:24:17.295809       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 21:24:17.322073       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:24:17.449645       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:24:17.511819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:24:17.515008       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:24:17.537531       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 21:24:17.682429       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0817 21:24:19.532008       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0817 21:24:35.659031       1 factory.go:503] pod: kube-system/coredns-66bff467f8-2skxw is already present in unschedulable queue
	E0817 21:24:35.704633       1 factory.go:503] pod: kube-system/coredns-66bff467f8-l8j5l is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 21:23:40 UTC, ends at Thu 2023-08-17 21:28:19 UTC. --
	Aug 17 21:25:22 ingress-addon-legacy-449686 kubelet[1450]: W0817 21:25:22.770567    1450 pod_container_deletor.go:77] Container "69ef260dbfffb393dd7fe7f7b3295aa2cca76401c4c8bc1c880e3ec515c7c7f8" not found in pod's containers
	Aug 17 21:25:22 ingress-addon-legacy-449686 kubelet[1450]: W0817 21:25:22.773634    1450 pod_container_deletor.go:77] Container "53d1cdcf5bd9b0c14ef2dc5eda9db3867dd6c52c72aa7d2f970c51dbf2d9f280" not found in pod's containers
	Aug 17 21:25:31 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:25:31.644863    1450 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 17 21:25:31 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:25:31.744662    1450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-trm56" (UniqueName: "kubernetes.io/secret/65dbf5f8-5af0-4837-99a9-5c58a14d7d99-minikube-ingress-dns-token-trm56") pod "kube-ingress-dns-minikube" (UID: "65dbf5f8-5af0-4837-99a9-5c58a14d7d99")
	Aug 17 21:25:44 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:25:44.237331    1450 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 17 21:25:44 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:25:44.389566    1450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-6ln26" (UniqueName: "kubernetes.io/secret/3de40a2f-a52b-425b-923e-ec0ead15b733-default-token-6ln26") pod "nginx" (UID: "3de40a2f-a52b-425b-923e-ec0ead15b733")
	Aug 17 21:28:03 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:03.234098    1450 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 17 21:28:03 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:03.405716    1450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-6ln26" (UniqueName: "kubernetes.io/secret/c6e810be-7dae-4a0c-a160-9cf04dae47f9-default-token-6ln26") pod "hello-world-app-5f5d8b66bb-nmfzr" (UID: "c6e810be-7dae-4a0c-a160-9cf04dae47f9")
	Aug 17 21:28:04 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:04.685303    1450 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012
	Aug 17 21:28:04 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:04.997671    1450 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012
	Aug 17 21:28:04 ingress-addon-legacy-449686 kubelet[1450]: E0817 21:28:04.998444    1450 remote_runtime.go:295] ContainerStatus "390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012" from runtime service failed: rpc error: code = NotFound desc = could not find container "390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012": container with ID starting with 390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012 not found: ID does not exist
	Aug 17 21:28:05 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:05.816521    1450 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-trm56" (UniqueName: "kubernetes.io/secret/65dbf5f8-5af0-4837-99a9-5c58a14d7d99-minikube-ingress-dns-token-trm56") pod "65dbf5f8-5af0-4837-99a9-5c58a14d7d99" (UID: "65dbf5f8-5af0-4837-99a9-5c58a14d7d99")
	Aug 17 21:28:05 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:05.818811    1450 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65dbf5f8-5af0-4837-99a9-5c58a14d7d99-minikube-ingress-dns-token-trm56" (OuterVolumeSpecName: "minikube-ingress-dns-token-trm56") pod "65dbf5f8-5af0-4837-99a9-5c58a14d7d99" (UID: "65dbf5f8-5af0-4837-99a9-5c58a14d7d99"). InnerVolumeSpecName "minikube-ingress-dns-token-trm56". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:28:05 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:05.916905    1450 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-trm56" (UniqueName: "kubernetes.io/secret/65dbf5f8-5af0-4837-99a9-5c58a14d7d99-minikube-ingress-dns-token-trm56") on node "ingress-addon-legacy-449686" DevicePath ""
	Aug 17 21:28:06 ingress-addon-legacy-449686 kubelet[1450]: E0817 21:28:06.315452    1450 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012\": container with ID starting with 390f95cebefee882f6e5255a75eb3da18708902248d325917b00658a93f15012 not found: ID does not exist"
	Aug 17 21:28:11 ingress-addon-legacy-449686 kubelet[1450]: E0817 21:28:11.274793    1450 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w7b4x.177c48e104cec6ce", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w7b4x", UID:"b301c03b-5458-448e-aefc-e835ec71037c", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-449686"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12fc21ad01758ce, ext:231615330181, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12fc21ad01758ce, ext:231615330181, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w7b4x.177c48e104cec6ce" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:28:11 ingress-addon-legacy-449686 kubelet[1450]: E0817 21:28:11.295582    1450 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w7b4x.177c48e104cec6ce", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w7b4x", UID:"b301c03b-5458-448e-aefc-e835ec71037c", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-449686"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12fc21ad01758ce, ext:231615330181, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12fc21ad10517f5, ext:231630911129, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w7b4x.177c48e104cec6ce" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:28:13 ingress-addon-legacy-449686 kubelet[1450]: W0817 21:28:13.730503    1450 pod_container_deletor.go:77] Container "3a4218832883b541bfcb8e15de3fcb10314eebd3019520d962a566b242af8315" not found in pod's containers
	Aug 17 21:28:15 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:15.450975    1450 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-6ptpd" (UniqueName: "kubernetes.io/secret/b301c03b-5458-448e-aefc-e835ec71037c-ingress-nginx-token-6ptpd") pod "b301c03b-5458-448e-aefc-e835ec71037c" (UID: "b301c03b-5458-448e-aefc-e835ec71037c")
	Aug 17 21:28:15 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:15.451049    1450 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b301c03b-5458-448e-aefc-e835ec71037c-webhook-cert") pod "b301c03b-5458-448e-aefc-e835ec71037c" (UID: "b301c03b-5458-448e-aefc-e835ec71037c")
	Aug 17 21:28:15 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:15.469918    1450 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b301c03b-5458-448e-aefc-e835ec71037c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b301c03b-5458-448e-aefc-e835ec71037c" (UID: "b301c03b-5458-448e-aefc-e835ec71037c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:28:15 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:15.470007    1450 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b301c03b-5458-448e-aefc-e835ec71037c-ingress-nginx-token-6ptpd" (OuterVolumeSpecName: "ingress-nginx-token-6ptpd") pod "b301c03b-5458-448e-aefc-e835ec71037c" (UID: "b301c03b-5458-448e-aefc-e835ec71037c"). InnerVolumeSpecName "ingress-nginx-token-6ptpd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:28:15 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:15.552078    1450 reconciler.go:319] Volume detached for volume "ingress-nginx-token-6ptpd" (UniqueName: "kubernetes.io/secret/b301c03b-5458-448e-aefc-e835ec71037c-ingress-nginx-token-6ptpd") on node "ingress-addon-legacy-449686" DevicePath ""
	Aug 17 21:28:15 ingress-addon-legacy-449686 kubelet[1450]: I0817 21:28:15.552201    1450 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b301c03b-5458-448e-aefc-e835ec71037c-webhook-cert") on node "ingress-addon-legacy-449686" DevicePath ""
	Aug 17 21:28:16 ingress-addon-legacy-449686 kubelet[1450]: W0817 21:28:16.315516    1450 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/b301c03b-5458-448e-aefc-e835ec71037c/volumes" does not exist
	
	* 
	* ==> storage-provisioner [4df6930cdfda7e6fa8e74e29640ecf736a6f183c1f3bfa57315459517365e461] <==
	* I0817 21:24:37.589605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0817 21:25:07.592022       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [aab563ec5373bdc8ac71e57c7169987d036191f2a0ac582230008a6a408b6130] <==
	* I0817 21:25:08.640423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:25:08.655746       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:25:08.655842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:25:08.668466       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:25:08.668510       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34b01d67-eab7-451f-9f54-699907cda429", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-449686_5a13b855-0c8c-4442-937a-ca20d9b2f6f5 became leader
	I0817 21:25:08.668864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-449686_5a13b855-0c8c-4442-937a-ca20d9b2f6f5!
	I0817 21:25:08.769097       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-449686_5a13b855-0c8c-4442-937a-ca20d9b2f6f5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-449686 -n ingress-addon-legacy-449686
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-449686 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (168.08s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-65x2b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-65x2b -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-65x2b -- sh -c "ping -c 1 192.168.39.1": exit status 1 (183.118656ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-65x2b): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-9c77m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-9c77m -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-9c77m -- sh -c "ping -c 1 192.168.39.1": exit status 1 (185.585889ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-9c77m): exit status 1
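Note on the failure mode above, offered as a hedged reading of the captured stderr rather than anything asserted by the test itself: the message "ping: permission denied (are you root?)" is the usual symptom of busybox's ping needing a raw ICMP socket, which an unprivileged container only gets if it runs as root, is granted the NET_RAW capability, or the node's net.ipv4.ping_group_range sysctl allows unprivileged ICMP. The sketch below is illustrative only and is not the manifest this test deploys; all names in it (pod name, image tag) are hypothetical. It builds a pod spec that adds NET_RAW through the container securityContext, using the k8s.io/api types.

    package main

    // Illustrative sketch (not part of this test suite): a pod spec that adds
    // the NET_RAW capability so an unprivileged busybox container can open the
    // raw ICMP socket that "ping" requires. Names here are hypothetical.

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-ping"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox:stable",
                    Command: []string{"sleep", "3600"},
                    SecurityContext: &corev1.SecurityContext{
                        Capabilities: &corev1.Capabilities{
                            // Grants raw-socket access without running as root.
                            Add: []corev1.Capability{"NET_RAW"},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

Applying a spec shaped like this (for example via kubectl apply) would let ping open its socket from a non-root container; whether that is the appropriate fix for this test is left open here.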
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-959371 -n multinode-959371
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-959371 logs -n 25: (1.298672653s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-043589 ssh -- ls                    | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-043589 ssh --                       | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-043589                           | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	| start   | -p mount-start-2-043589                           | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC |                     |
	|         | --profile mount-start-2-043589                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-043589 ssh -- ls                    | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-043589 ssh --                       | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-043589                           | mount-start-2-043589 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	| delete  | -p mount-start-1-022661                           | mount-start-1-022661 | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:32 UTC |
	| start   | -p multinode-959371                               | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:32 UTC | 17 Aug 23 21:34 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- apply -f                   | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- rollout                    | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- get pods -o                | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- get pods -o                | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-65x2b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-9c77m --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-65x2b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-9c77m --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-65x2b -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-9c77m -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- get pods -o                | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-65x2b                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC |                     |
	|         | busybox-67b7f59bb-65x2b -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC | 17 Aug 23 21:34 UTC |
	|         | busybox-67b7f59bb-9c77m                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-959371 -- exec                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:34 UTC |                     |
	|         | busybox-67b7f59bb-9c77m -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:32:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:32:40.245063  223217 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:32:40.245191  223217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:32:40.245200  223217 out.go:309] Setting ErrFile to fd 2...
	I0817 21:32:40.245205  223217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:32:40.245421  223217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:32:40.246048  223217 out.go:303] Setting JSON to false
	I0817 21:32:40.247047  223217 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":22485,"bootTime":1692285475,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:32:40.247117  223217 start.go:138] virtualization: kvm guest
	I0817 21:32:40.249791  223217 out.go:177] * [multinode-959371] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:32:40.251758  223217 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:32:40.251756  223217 notify.go:220] Checking for updates...
	I0817 21:32:40.253840  223217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:32:40.255793  223217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:32:40.257692  223217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:32:40.259532  223217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:32:40.261230  223217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:32:40.263181  223217 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:32:40.299736  223217 out.go:177] * Using the kvm2 driver based on user configuration
	I0817 21:32:40.301326  223217 start.go:298] selected driver: kvm2
	I0817 21:32:40.301344  223217 start.go:902] validating driver "kvm2" against <nil>
	I0817 21:32:40.301356  223217 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:32:40.302042  223217 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:32:40.302154  223217 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 21:32:40.317271  223217 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 21:32:40.317332  223217 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:32:40.317566  223217 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:32:40.317608  223217 cni.go:84] Creating CNI manager for ""
	I0817 21:32:40.317619  223217 cni.go:136] 0 nodes found, recommending kindnet
	I0817 21:32:40.317627  223217 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:32:40.317646  223217 start_flags.go:319] config:
	{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugi
n:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:32:40.317807  223217 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:32:40.320137  223217 out.go:177] * Starting control plane node multinode-959371 in cluster multinode-959371
	I0817 21:32:40.321704  223217 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:32:40.321748  223217 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:32:40.321760  223217 cache.go:57] Caching tarball of preloaded images
	I0817 21:32:40.321886  223217 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:32:40.321904  223217 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:32:40.322283  223217 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:32:40.322311  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json: {Name:mke8b028d3c7043e9c9949234a99929650ecadc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:32:40.322473  223217 start.go:365] acquiring machines lock for multinode-959371: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:32:40.322510  223217 start.go:369] acquired machines lock for "multinode-959371" in 19.698µs
	I0817 21:32:40.322534  223217 start.go:93] Provisioning new machine with config: &{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterNam
e:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:32:40.322614  223217 start.go:125] createHost starting for "" (driver="kvm2")
	I0817 21:32:40.324551  223217 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0817 21:32:40.324683  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:32:40.324738  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:32:40.339585  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I0817 21:32:40.340064  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:32:40.340656  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:32:40.340677  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:32:40.341102  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:32:40.341407  223217 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:32:40.341582  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:32:40.341764  223217 start.go:159] libmachine.API.Create for "multinode-959371" (driver="kvm2")
	I0817 21:32:40.341800  223217 client.go:168] LocalClient.Create starting
	I0817 21:32:40.341837  223217 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem
	I0817 21:32:40.341940  223217 main.go:141] libmachine: Decoding PEM data...
	I0817 21:32:40.341970  223217 main.go:141] libmachine: Parsing certificate...
	I0817 21:32:40.342069  223217 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem
	I0817 21:32:40.342100  223217 main.go:141] libmachine: Decoding PEM data...
	I0817 21:32:40.342119  223217 main.go:141] libmachine: Parsing certificate...
	I0817 21:32:40.342140  223217 main.go:141] libmachine: Running pre-create checks...
	I0817 21:32:40.342156  223217 main.go:141] libmachine: (multinode-959371) Calling .PreCreateCheck
	I0817 21:32:40.342520  223217 main.go:141] libmachine: (multinode-959371) Calling .GetConfigRaw
	I0817 21:32:40.342924  223217 main.go:141] libmachine: Creating machine...
	I0817 21:32:40.342940  223217 main.go:141] libmachine: (multinode-959371) Calling .Create
	I0817 21:32:40.343109  223217 main.go:141] libmachine: (multinode-959371) Creating KVM machine...
	I0817 21:32:40.344313  223217 main.go:141] libmachine: (multinode-959371) DBG | found existing default KVM network
	I0817 21:32:40.345036  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:40.344883  223240 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298b0}
	I0817 21:32:40.351071  223217 main.go:141] libmachine: (multinode-959371) DBG | trying to create private KVM network mk-multinode-959371 192.168.39.0/24...
	I0817 21:32:40.430684  223217 main.go:141] libmachine: (multinode-959371) DBG | private KVM network mk-multinode-959371 192.168.39.0/24 created
	I0817 21:32:40.430725  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:40.430658  223240 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:32:40.430741  223217 main.go:141] libmachine: (multinode-959371) Setting up store path in /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371 ...
	I0817 21:32:40.430771  223217 main.go:141] libmachine: (multinode-959371) Building disk image from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0817 21:32:40.430852  223217 main.go:141] libmachine: (multinode-959371) Downloading /home/jenkins/minikube-integration/16865-203458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0817 21:32:40.657853  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:40.657693  223240 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa...
	I0817 21:32:40.808962  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:40.808810  223240 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/multinode-959371.rawdisk...
	I0817 21:32:40.809015  223217 main.go:141] libmachine: (multinode-959371) DBG | Writing magic tar header
	I0817 21:32:40.809029  223217 main.go:141] libmachine: (multinode-959371) DBG | Writing SSH key tar header
	I0817 21:32:40.809048  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:40.808934  223240 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371 ...
	I0817 21:32:40.809064  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371
	I0817 21:32:40.809081  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines
	I0817 21:32:40.809098  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:32:40.809116  223217 main.go:141] libmachine: (multinode-959371) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371 (perms=drwx------)
	I0817 21:32:40.809132  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458
	I0817 21:32:40.809145  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0817 21:32:40.809154  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home/jenkins
	I0817 21:32:40.809164  223217 main.go:141] libmachine: (multinode-959371) DBG | Checking permissions on dir: /home
	I0817 21:32:40.809169  223217 main.go:141] libmachine: (multinode-959371) DBG | Skipping /home - not owner
	I0817 21:32:40.809187  223217 main.go:141] libmachine: (multinode-959371) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines (perms=drwxr-xr-x)
	I0817 21:32:40.809200  223217 main.go:141] libmachine: (multinode-959371) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube (perms=drwxr-xr-x)
	I0817 21:32:40.809216  223217 main.go:141] libmachine: (multinode-959371) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458 (perms=drwxrwxr-x)
	I0817 21:32:40.809227  223217 main.go:141] libmachine: (multinode-959371) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0817 21:32:40.809235  223217 main.go:141] libmachine: (multinode-959371) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0817 21:32:40.809242  223217 main.go:141] libmachine: (multinode-959371) Creating domain...
	I0817 21:32:40.810359  223217 main.go:141] libmachine: (multinode-959371) define libvirt domain using xml: 
	I0817 21:32:40.810393  223217 main.go:141] libmachine: (multinode-959371) <domain type='kvm'>
	I0817 21:32:40.810406  223217 main.go:141] libmachine: (multinode-959371)   <name>multinode-959371</name>
	I0817 21:32:40.810415  223217 main.go:141] libmachine: (multinode-959371)   <memory unit='MiB'>2200</memory>
	I0817 21:32:40.810425  223217 main.go:141] libmachine: (multinode-959371)   <vcpu>2</vcpu>
	I0817 21:32:40.810441  223217 main.go:141] libmachine: (multinode-959371)   <features>
	I0817 21:32:40.810450  223217 main.go:141] libmachine: (multinode-959371)     <acpi/>
	I0817 21:32:40.810474  223217 main.go:141] libmachine: (multinode-959371)     <apic/>
	I0817 21:32:40.810517  223217 main.go:141] libmachine: (multinode-959371)     <pae/>
	I0817 21:32:40.810542  223217 main.go:141] libmachine: (multinode-959371)     
	I0817 21:32:40.810560  223217 main.go:141] libmachine: (multinode-959371)   </features>
	I0817 21:32:40.810574  223217 main.go:141] libmachine: (multinode-959371)   <cpu mode='host-passthrough'>
	I0817 21:32:40.810586  223217 main.go:141] libmachine: (multinode-959371)   
	I0817 21:32:40.810594  223217 main.go:141] libmachine: (multinode-959371)   </cpu>
	I0817 21:32:40.810600  223217 main.go:141] libmachine: (multinode-959371)   <os>
	I0817 21:32:40.810616  223217 main.go:141] libmachine: (multinode-959371)     <type>hvm</type>
	I0817 21:32:40.810625  223217 main.go:141] libmachine: (multinode-959371)     <boot dev='cdrom'/>
	I0817 21:32:40.810640  223217 main.go:141] libmachine: (multinode-959371)     <boot dev='hd'/>
	I0817 21:32:40.810654  223217 main.go:141] libmachine: (multinode-959371)     <bootmenu enable='no'/>
	I0817 21:32:40.810674  223217 main.go:141] libmachine: (multinode-959371)   </os>
	I0817 21:32:40.810746  223217 main.go:141] libmachine: (multinode-959371)   <devices>
	I0817 21:32:40.810783  223217 main.go:141] libmachine: (multinode-959371)     <disk type='file' device='cdrom'>
	I0817 21:32:40.810808  223217 main.go:141] libmachine: (multinode-959371)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/boot2docker.iso'/>
	I0817 21:32:40.810821  223217 main.go:141] libmachine: (multinode-959371)       <target dev='hdc' bus='scsi'/>
	I0817 21:32:40.810831  223217 main.go:141] libmachine: (multinode-959371)       <readonly/>
	I0817 21:32:40.810842  223217 main.go:141] libmachine: (multinode-959371)     </disk>
	I0817 21:32:40.810865  223217 main.go:141] libmachine: (multinode-959371)     <disk type='file' device='disk'>
	I0817 21:32:40.810885  223217 main.go:141] libmachine: (multinode-959371)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0817 21:32:40.810906  223217 main.go:141] libmachine: (multinode-959371)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/multinode-959371.rawdisk'/>
	I0817 21:32:40.810916  223217 main.go:141] libmachine: (multinode-959371)       <target dev='hda' bus='virtio'/>
	I0817 21:32:40.810928  223217 main.go:141] libmachine: (multinode-959371)     </disk>
	I0817 21:32:40.810940  223217 main.go:141] libmachine: (multinode-959371)     <interface type='network'>
	I0817 21:32:40.810956  223217 main.go:141] libmachine: (multinode-959371)       <source network='mk-multinode-959371'/>
	I0817 21:32:40.810973  223217 main.go:141] libmachine: (multinode-959371)       <model type='virtio'/>
	I0817 21:32:40.810986  223217 main.go:141] libmachine: (multinode-959371)     </interface>
	I0817 21:32:40.810997  223217 main.go:141] libmachine: (multinode-959371)     <interface type='network'>
	I0817 21:32:40.811007  223217 main.go:141] libmachine: (multinode-959371)       <source network='default'/>
	I0817 21:32:40.811019  223217 main.go:141] libmachine: (multinode-959371)       <model type='virtio'/>
	I0817 21:32:40.811037  223217 main.go:141] libmachine: (multinode-959371)     </interface>
	I0817 21:32:40.811054  223217 main.go:141] libmachine: (multinode-959371)     <serial type='pty'>
	I0817 21:32:40.811085  223217 main.go:141] libmachine: (multinode-959371)       <target port='0'/>
	I0817 21:32:40.811110  223217 main.go:141] libmachine: (multinode-959371)     </serial>
	I0817 21:32:40.811136  223217 main.go:141] libmachine: (multinode-959371)     <console type='pty'>
	I0817 21:32:40.811158  223217 main.go:141] libmachine: (multinode-959371)       <target type='serial' port='0'/>
	I0817 21:32:40.811181  223217 main.go:141] libmachine: (multinode-959371)     </console>
	I0817 21:32:40.811193  223217 main.go:141] libmachine: (multinode-959371)     <rng model='virtio'>
	I0817 21:32:40.811209  223217 main.go:141] libmachine: (multinode-959371)       <backend model='random'>/dev/random</backend>
	I0817 21:32:40.811219  223217 main.go:141] libmachine: (multinode-959371)     </rng>
	I0817 21:32:40.811234  223217 main.go:141] libmachine: (multinode-959371)     
	I0817 21:32:40.811254  223217 main.go:141] libmachine: (multinode-959371)     
	I0817 21:32:40.811264  223217 main.go:141] libmachine: (multinode-959371)   </devices>
	I0817 21:32:40.811275  223217 main.go:141] libmachine: (multinode-959371) </domain>
	I0817 21:32:40.811293  223217 main.go:141] libmachine: (multinode-959371) 
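The domain definition logged above can be thought of as a filled-in template. Below is a minimal sketch using Go's text/template with an illustrative domain struct and shortened paths; it is not the driver's actual code, and the field names are assumptions.

// domainxml.go - illustrative rendering of a libvirt <domain> definition like the one above.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domain struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the logged machine; the paths are placeholders.
	d := domain{
		Name:      "multinode-959371",
		MemoryMiB: 2200,
		CPUs:      2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/multinode-959371.rawdisk",
		Network:   "mk-multinode-959371",
	}
	if err := t.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
}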
	I0817 21:32:40.815794  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:bb:c5:e3 in network default
	I0817 21:32:40.816431  223217 main.go:141] libmachine: (multinode-959371) Ensuring networks are active...
	I0817 21:32:40.816454  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:40.817187  223217 main.go:141] libmachine: (multinode-959371) Ensuring network default is active
	I0817 21:32:40.817498  223217 main.go:141] libmachine: (multinode-959371) Ensuring network mk-multinode-959371 is active
	I0817 21:32:40.818164  223217 main.go:141] libmachine: (multinode-959371) Getting domain xml...
	I0817 21:32:40.819034  223217 main.go:141] libmachine: (multinode-959371) Creating domain...
	I0817 21:32:42.063697  223217 main.go:141] libmachine: (multinode-959371) Waiting to get IP...
	I0817 21:32:42.064759  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:42.065165  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:42.065209  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:42.065152  223240 retry.go:31] will retry after 247.434277ms: waiting for machine to come up
	I0817 21:32:42.315132  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:42.315771  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:42.315800  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:42.315685  223240 retry.go:31] will retry after 282.973988ms: waiting for machine to come up
	I0817 21:32:42.600409  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:42.600998  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:42.601032  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:42.600920  223240 retry.go:31] will retry after 368.726843ms: waiting for machine to come up
	I0817 21:32:42.972870  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:42.973386  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:42.973412  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:42.973332  223240 retry.go:31] will retry after 508.242169ms: waiting for machine to come up
	I0817 21:32:43.483274  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:43.483792  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:43.483824  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:43.483731  223240 retry.go:31] will retry after 508.503827ms: waiting for machine to come up
	I0817 21:32:43.993412  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:43.993871  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:43.993897  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:43.993820  223240 retry.go:31] will retry after 758.360689ms: waiting for machine to come up
	I0817 21:32:44.753461  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:44.753880  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:44.753913  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:44.753835  223240 retry.go:31] will retry after 792.002134ms: waiting for machine to come up
	I0817 21:32:45.547495  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:45.547892  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:45.547927  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:45.547828  223240 retry.go:31] will retry after 1.199694064s: waiting for machine to come up
	I0817 21:32:46.749383  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:46.749788  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:46.749812  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:46.749750  223240 retry.go:31] will retry after 1.696572407s: waiting for machine to come up
	I0817 21:32:48.448689  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:48.449189  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:48.449235  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:48.449133  223240 retry.go:31] will retry after 2.018559952s: waiting for machine to come up
	I0817 21:32:50.469478  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:50.469944  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:50.470000  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:50.469946  223240 retry.go:31] will retry after 1.897635502s: waiting for machine to come up
	I0817 21:32:52.370217  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:52.370687  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:52.370758  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:52.370686  223240 retry.go:31] will retry after 2.646544766s: waiting for machine to come up
	I0817 21:32:55.020625  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:55.021264  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:55.021290  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:55.021217  223240 retry.go:31] will retry after 3.521820247s: waiting for machine to come up
	I0817 21:32:58.546652  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:32:58.547004  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:32:58.547038  223217 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:32:58.546949  223240 retry.go:31] will retry after 5.348180427s: waiting for machine to come up
	I0817 21:33:03.899329  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:03.899849  223217 main.go:141] libmachine: (multinode-959371) Found IP for machine: 192.168.39.104
	I0817 21:33:03.899866  223217 main.go:141] libmachine: (multinode-959371) Reserving static IP address...
	I0817 21:33:03.899878  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has current primary IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:03.900265  223217 main.go:141] libmachine: (multinode-959371) DBG | unable to find host DHCP lease matching {name: "multinode-959371", mac: "52:54:00:b5:61:ee", ip: "192.168.39.104"} in network mk-multinode-959371
	I0817 21:33:03.977857  223217 main.go:141] libmachine: (multinode-959371) DBG | Getting to WaitForSSH function...
	I0817 21:33:03.977893  223217 main.go:141] libmachine: (multinode-959371) Reserved static IP address: 192.168.39.104
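The repeated "will retry after ..." lines above follow a growing, jittered delay while polling for the machine's DHCP lease. Here is a minimal Go sketch of that retry pattern; lookupIP is a placeholder assumption standing in for the libvirt lease query, and the backoff constants are illustrative rather than minikube's retry.go values.

// waitip.go - illustrative retry/backoff loop while waiting for a machine IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a stand-in; a real implementation would read the libvirt DHCP leases.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jittered, growing delay: the logged waits climb from ~250ms to several seconds.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}

The jitter keeps several concurrent machine creations from polling the lease file in lockstep; the cap keeps the wait responsive once the guest finally gets an address.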
	I0817 21:33:03.977931  223217 main.go:141] libmachine: (multinode-959371) Waiting for SSH to be available...
	I0817 21:33:03.980960  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:03.981343  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:03.981383  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:03.981525  223217 main.go:141] libmachine: (multinode-959371) DBG | Using SSH client type: external
	I0817 21:33:03.981566  223217 main.go:141] libmachine: (multinode-959371) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa (-rw-------)
	I0817 21:33:03.981604  223217 main.go:141] libmachine: (multinode-959371) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 21:33:03.981625  223217 main.go:141] libmachine: (multinode-959371) DBG | About to run SSH command:
	I0817 21:33:03.981640  223217 main.go:141] libmachine: (multinode-959371) DBG | exit 0
	I0817 21:33:04.077977  223217 main.go:141] libmachine: (multinode-959371) DBG | SSH cmd err, output: <nil>: 
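The WaitForSSH step above shells out to the external ssh client and runs "exit 0" as a reachability probe. Below is a minimal Go sketch of that probe using os/exec; the flag set mirrors the logged command and the key path is a placeholder, not the actual libmachine implementation.

// sshprobe.go - illustrative external-ssh reachability check ("exit 0").
package main

import (
	"fmt"
	"os/exec"
)

func sshReachable(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0", // the probe command from the log: success means sshd is up
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := sshReachable("192.168.39.104", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}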
	I0817 21:33:04.078306  223217 main.go:141] libmachine: (multinode-959371) KVM machine creation complete!
	I0817 21:33:04.078670  223217 main.go:141] libmachine: (multinode-959371) Calling .GetConfigRaw
	I0817 21:33:04.079213  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:04.079410  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:04.079562  223217 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0817 21:33:04.079576  223217 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:33:04.081051  223217 main.go:141] libmachine: Detecting operating system of created instance...
	I0817 21:33:04.081070  223217 main.go:141] libmachine: Waiting for SSH to be available...
	I0817 21:33:04.081080  223217 main.go:141] libmachine: Getting to WaitForSSH function...
	I0817 21:33:04.081091  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.083601  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.083969  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.084016  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.084119  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:04.084303  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.084475  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.084627  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:04.084789  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:33:04.085256  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:33:04.085273  223217 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0817 21:33:04.209690  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:33:04.209717  223217 main.go:141] libmachine: Detecting the provisioner...
	I0817 21:33:04.209725  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.213093  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.213503  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.213537  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.213679  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:04.213934  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.214144  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.214324  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:04.214516  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:33:04.214948  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:33:04.214963  223217 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0817 21:33:04.343164  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0817 21:33:04.343264  223217 main.go:141] libmachine: found compatible host: buildroot
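Provisioner detection above amounts to reading /etc/os-release on the guest and matching on its fields. Here is a minimal local Go sketch that extracts the ID field; matching on ID alone is an assumption made for illustration, not the exact detection logic.

// osrelease.go - illustrative /etc/os-release parsing for provisioner detection.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no ID= entry in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found compatible host:", id) // e.g. "buildroot" inside the guest
}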
	I0817 21:33:04.343276  223217 main.go:141] libmachine: Provisioning with buildroot...
	I0817 21:33:04.343285  223217 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:33:04.343575  223217 buildroot.go:166] provisioning hostname "multinode-959371"
	I0817 21:33:04.343609  223217 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:33:04.343788  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.346574  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.347014  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.347042  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.347230  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:04.347533  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.347769  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.347948  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:04.348109  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:33:04.348563  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:33:04.348579  223217 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959371 && echo "multinode-959371" | sudo tee /etc/hostname
	I0817 21:33:04.491530  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959371
	
	I0817 21:33:04.491570  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.494960  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.495364  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.495397  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.495551  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:04.495766  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.495967  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.496097  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:04.496312  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:33:04.496738  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:33:04.496757  223217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959371/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:33:04.631507  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:33:04.631561  223217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:33:04.631601  223217 buildroot.go:174] setting up certificates
	I0817 21:33:04.631614  223217 provision.go:83] configureAuth start
	I0817 21:33:04.631637  223217 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:33:04.632075  223217 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:33:04.634943  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.635297  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.635342  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.635494  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.637721  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.638111  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.638155  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.638338  223217 provision.go:138] copyHostCerts
	I0817 21:33:04.638369  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:33:04.638401  223217 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 21:33:04.638410  223217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:33:04.638468  223217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:33:04.638554  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:33:04.638570  223217 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 21:33:04.638577  223217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:33:04.638595  223217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:33:04.638648  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:33:04.638666  223217 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 21:33:04.638672  223217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:33:04.638690  223217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:33:04.638748  223217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.multinode-959371 san=[192.168.39.104 192.168.39.104 localhost 127.0.0.1 minikube multinode-959371]
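The server certificate above is generated with a SAN list covering the machine IP, loopback, and hostnames. The sketch below produces a certificate with the same SAN handling using crypto/x509; unlike minikube it self-signs instead of signing with the cluster CA key, so treat it purely as an illustration.

// servercert.go - illustrative server certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-959371"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the logged list: IPs and hostnames the server must answer for.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.104"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-959371"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}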
	I0817 21:33:04.784976  223217 provision.go:172] copyRemoteCerts
	I0817 21:33:04.785052  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:33:04.785084  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.788092  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.788507  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.788555  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.788717  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:04.788938  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.789110  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:04.789248  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:33:04.883791  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:33:04.883893  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:33:04.912049  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:33:04.912124  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0817 21:33:04.937157  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:33:04.937236  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:33:04.961713  223217 provision.go:86] duration metric: configureAuth took 330.076688ms
	I0817 21:33:04.961757  223217 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:33:04.962004  223217 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:33:04.962137  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:04.965038  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.965382  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:04.965417  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:04.965629  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:04.965860  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.966069  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:04.966225  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:04.966425  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:33:04.966820  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:33:04.966837  223217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:33:05.306149  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:33:05.306182  223217 main.go:141] libmachine: Checking connection to Docker...
	I0817 21:33:05.306192  223217 main.go:141] libmachine: (multinode-959371) Calling .GetURL
	I0817 21:33:05.307911  223217 main.go:141] libmachine: (multinode-959371) DBG | Using libvirt version 6000000
	I0817 21:33:05.310153  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.310568  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.310602  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.310839  223217 main.go:141] libmachine: Docker is up and running!
	I0817 21:33:05.310855  223217 main.go:141] libmachine: Reticulating splines...
	I0817 21:33:05.310862  223217 client.go:171] LocalClient.Create took 24.969054552s
	I0817 21:33:05.310892  223217 start.go:167] duration metric: libmachine.API.Create for "multinode-959371" took 24.969129436s
	I0817 21:33:05.310906  223217 start.go:300] post-start starting for "multinode-959371" (driver="kvm2")
	I0817 21:33:05.310918  223217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:33:05.310948  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:05.311204  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:33:05.311234  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:05.313524  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.313894  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.313930  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.314097  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:05.314307  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:05.314466  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:05.314607  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:33:05.409188  223217 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:33:05.413720  223217 command_runner.go:130] > NAME=Buildroot
	I0817 21:33:05.413742  223217 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0817 21:33:05.413746  223217 command_runner.go:130] > ID=buildroot
	I0817 21:33:05.413751  223217 command_runner.go:130] > VERSION_ID=2021.02.12
	I0817 21:33:05.413756  223217 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0817 21:33:05.413945  223217 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:33:05.413972  223217 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:33:05.414074  223217 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:33:05.414157  223217 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 21:33:05.414168  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /etc/ssl/certs/2106702.pem
	I0817 21:33:05.414262  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:33:05.424310  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:33:05.447905  223217 start.go:303] post-start completed in 136.982515ms
	I0817 21:33:05.447974  223217 main.go:141] libmachine: (multinode-959371) Calling .GetConfigRaw
	I0817 21:33:05.448737  223217 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:33:05.451826  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.452226  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.452261  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.452683  223217 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:33:05.452917  223217 start.go:128] duration metric: createHost completed in 25.130291307s
	I0817 21:33:05.452964  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:05.455301  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.455585  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.455612  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.455784  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:05.455974  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:05.456138  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:05.456245  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:05.456418  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:33:05.457028  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:33:05.457049  223217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 21:33:05.583253  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692307985.563747276
	
	I0817 21:33:05.583278  223217 fix.go:206] guest clock: 1692307985.563747276
	I0817 21:33:05.583287  223217 fix.go:219] Guest: 2023-08-17 21:33:05.563747276 +0000 UTC Remote: 2023-08-17 21:33:05.452931312 +0000 UTC m=+25.244743887 (delta=110.815964ms)
	I0817 21:33:05.583311  223217 fix.go:190] guest clock delta is within tolerance: 110.815964ms
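The guest-clock check above compares the timestamp reported by the guest (via date over SSH) with the host's reference time and accepts a small delta. A minimal Go sketch using the two timestamps from the log follows; the 2-second tolerance is an assumed value for illustration, not necessarily minikube's threshold.

// clockdelta.go - illustrative guest/host clock delta check.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time as reported by the SSH probe, host reference time from the log.
	guest := time.Unix(1692307985, 563747276)
	host := time.Unix(1692307985, 452931312)

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}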
	I0817 21:33:05.583318  223217 start.go:83] releasing machines lock for "multinode-959371", held for 25.260796325s
	I0817 21:33:05.583350  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:05.583678  223217 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:33:05.586480  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.586958  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.586985  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.587170  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:05.587852  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:05.588062  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:05.588169  223217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:33:05.588238  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:05.588267  223217 ssh_runner.go:195] Run: cat /version.json
	I0817 21:33:05.588287  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:05.590949  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.591018  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.591419  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.591453  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.591481  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:05.591498  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:05.591597  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:05.591790  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:05.591821  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:05.591957  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:05.591972  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:05.592139  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:05.592184  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:33:05.592250  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:33:05.716927  223217 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:33:05.717737  223217 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "be0194f682c2c37366eacb8c13503cb6c7a41cf8"}
	I0817 21:33:05.717914  223217 ssh_runner.go:195] Run: systemctl --version
	I0817 21:33:05.724069  223217 command_runner.go:130] > systemd 247 (247)
	I0817 21:33:05.724106  223217 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0817 21:33:05.724307  223217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:33:05.892495  223217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:33:05.899270  223217 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0817 21:33:05.899398  223217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:33:05.899493  223217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:33:05.915736  223217 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0817 21:33:05.916327  223217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 21:33:05.916351  223217 start.go:466] detecting cgroup driver to use...
	I0817 21:33:05.916530  223217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:33:05.934663  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:33:05.948164  223217 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:33:05.948227  223217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:33:05.962314  223217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:33:05.976714  223217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:33:06.098904  223217 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0817 21:33:06.099003  223217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:33:06.113834  223217 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0817 21:33:06.216079  223217 docker.go:212] disabling docker service ...
	I0817 21:33:06.216150  223217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:33:06.229447  223217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:33:06.240456  223217 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0817 21:33:06.240589  223217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:33:06.254920  223217 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0817 21:33:06.349810  223217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:33:06.362497  223217 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0817 21:33:06.363077  223217 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0817 21:33:06.458919  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:33:06.471557  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:33:06.489086  223217 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0817 21:33:06.489147  223217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:33:06.489212  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:33:06.498637  223217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:33:06.498744  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:33:06.508286  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:33:06.517869  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
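The CRI-O settings above are applied as in-place sed substitutions on /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup). Here is a minimal Go sketch of the same key-replacement idea using regexp; the sample file contents and the append-if-missing behaviour are assumptions for illustration.

// crioconf.go - illustrative sed-style key replacement in a CRI-O config snippet.
package main

import (
	"fmt"
	"regexp"
)

// setKey rewrites any existing `key = ...` line to the new value, or appends one.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	repl := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(repl))
	}
	return append(conf, []byte("\n"+repl+"\n")...)
}

func main() {
	conf := []byte("pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n")
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(string(conf))
}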
	I0817 21:33:06.527145  223217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:33:06.536613  223217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:33:06.544422  223217 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:33:06.544487  223217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:33:06.544541  223217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 21:33:06.557335  223217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:33:06.565720  223217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:33:06.673162  223217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:33:06.848399  223217 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:33:06.848479  223217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:33:06.853156  223217 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:33:06.853186  223217 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:33:06.853196  223217 command_runner.go:130] > Device: 16h/22d	Inode: 828         Links: 1
	I0817 21:33:06.853207  223217 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:33:06.853215  223217 command_runner.go:130] > Access: 2023-08-17 21:33:06.818378925 +0000
	I0817 21:33:06.853224  223217 command_runner.go:130] > Modify: 2023-08-17 21:33:06.818378925 +0000
	I0817 21:33:06.853231  223217 command_runner.go:130] > Change: 2023-08-17 21:33:06.818378925 +0000
	I0817 21:33:06.853236  223217 command_runner.go:130] >  Birth: -
	I0817 21:33:06.853257  223217 start.go:534] Will wait 60s for crictl version
	I0817 21:33:06.853313  223217 ssh_runner.go:195] Run: which crictl
	I0817 21:33:06.857756  223217 command_runner.go:130] > /usr/bin/crictl
	I0817 21:33:06.858539  223217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:33:06.894560  223217 command_runner.go:130] > Version:  0.1.0
	I0817 21:33:06.894584  223217 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:33:06.894590  223217 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0817 21:33:06.894595  223217 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0817 21:33:06.894614  223217 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:33:06.894701  223217 ssh_runner.go:195] Run: crio --version
	I0817 21:33:06.946755  223217 command_runner.go:130] > crio version 1.24.1
	I0817 21:33:06.946786  223217 command_runner.go:130] > Version:          1.24.1
	I0817 21:33:06.946797  223217 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:33:06.946804  223217 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:33:06.946813  223217 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:33:06.946820  223217 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:33:06.946826  223217 command_runner.go:130] > Compiler:         gc
	I0817 21:33:06.946832  223217 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:33:06.946849  223217 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:33:06.946863  223217 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:33:06.946870  223217 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:33:06.946877  223217 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:33:06.947060  223217 ssh_runner.go:195] Run: crio --version
	I0817 21:33:07.001360  223217 command_runner.go:130] > crio version 1.24.1
	I0817 21:33:07.001388  223217 command_runner.go:130] > Version:          1.24.1
	I0817 21:33:07.001397  223217 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:33:07.001401  223217 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:33:07.001407  223217 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:33:07.001412  223217 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:33:07.001416  223217 command_runner.go:130] > Compiler:         gc
	I0817 21:33:07.001422  223217 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:33:07.001427  223217 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:33:07.001434  223217 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:33:07.001438  223217 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:33:07.001442  223217 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:33:07.004287  223217 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 21:33:07.006226  223217 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:33:07.009021  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:07.009544  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:07.009581  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:07.009809  223217 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:33:07.014191  223217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:33:07.026008  223217 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:33:07.026078  223217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:33:07.053193  223217 command_runner.go:130] > {
	I0817 21:33:07.053223  223217 command_runner.go:130] >   "images": [
	I0817 21:33:07.053227  223217 command_runner.go:130] >   ]
	I0817 21:33:07.053230  223217 command_runner.go:130] > }
	I0817 21:33:07.054271  223217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 21:33:07.054343  223217 ssh_runner.go:195] Run: which lz4
	I0817 21:33:07.058191  223217 command_runner.go:130] > /usr/bin/lz4
	I0817 21:33:07.058311  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0817 21:33:07.058398  223217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 21:33:07.062466  223217 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:33:07.062743  223217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:33:07.062787  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 21:33:08.863389  223217 crio.go:444] Took 1.805014 seconds to copy over tarball
	I0817 21:33:08.863468  223217 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:33:11.561628  223217 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.698132025s)
	I0817 21:33:11.561674  223217 crio.go:451] Took 2.698255 seconds to extract the tarball
	I0817 21:33:11.561687  223217 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 21:33:11.604120  223217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:33:11.662528  223217 command_runner.go:130] > {
	I0817 21:33:11.662552  223217 command_runner.go:130] >   "images": [
	I0817 21:33:11.662556  223217 command_runner.go:130] >     {
	I0817 21:33:11.662564  223217 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0817 21:33:11.662570  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.662576  223217 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0817 21:33:11.662580  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662584  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.662605  223217 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0817 21:33:11.662611  223217 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0817 21:33:11.662615  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662620  223217 command_runner.go:130] >       "size": "65249302",
	I0817 21:33:11.662624  223217 command_runner.go:130] >       "uid": null,
	I0817 21:33:11.662628  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.662635  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.662640  223217 command_runner.go:130] >     },
	I0817 21:33:11.662643  223217 command_runner.go:130] >     {
	I0817 21:33:11.662649  223217 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0817 21:33:11.662654  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.662659  223217 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0817 21:33:11.662665  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662669  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.662677  223217 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0817 21:33:11.662686  223217 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0817 21:33:11.662690  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662694  223217 command_runner.go:130] >       "size": "31470524",
	I0817 21:33:11.662701  223217 command_runner.go:130] >       "uid": null,
	I0817 21:33:11.662709  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.662714  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.662717  223217 command_runner.go:130] >     },
	I0817 21:33:11.662721  223217 command_runner.go:130] >     {
	I0817 21:33:11.662727  223217 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0817 21:33:11.662731  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.662736  223217 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0817 21:33:11.662740  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662743  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.662751  223217 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0817 21:33:11.662758  223217 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0817 21:33:11.662765  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662768  223217 command_runner.go:130] >       "size": "53621675",
	I0817 21:33:11.662772  223217 command_runner.go:130] >       "uid": null,
	I0817 21:33:11.662785  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.662791  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.662795  223217 command_runner.go:130] >     },
	I0817 21:33:11.662802  223217 command_runner.go:130] >     {
	I0817 21:33:11.662809  223217 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0817 21:33:11.662816  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.662821  223217 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0817 21:33:11.662827  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662831  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.662838  223217 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0817 21:33:11.662847  223217 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0817 21:33:11.662851  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662855  223217 command_runner.go:130] >       "size": "297083935",
	I0817 21:33:11.662859  223217 command_runner.go:130] >       "uid": {
	I0817 21:33:11.662863  223217 command_runner.go:130] >         "value": "0"
	I0817 21:33:11.662877  223217 command_runner.go:130] >       },
	I0817 21:33:11.662883  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.662887  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.662891  223217 command_runner.go:130] >     },
	I0817 21:33:11.662895  223217 command_runner.go:130] >     {
	I0817 21:33:11.662904  223217 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0817 21:33:11.662911  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.662918  223217 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0817 21:33:11.662922  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662927  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.662934  223217 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0817 21:33:11.662962  223217 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0817 21:33:11.662977  223217 command_runner.go:130] >       ],
	I0817 21:33:11.662981  223217 command_runner.go:130] >       "size": "122078160",
	I0817 21:33:11.662984  223217 command_runner.go:130] >       "uid": {
	I0817 21:33:11.662988  223217 command_runner.go:130] >         "value": "0"
	I0817 21:33:11.662992  223217 command_runner.go:130] >       },
	I0817 21:33:11.662996  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.663000  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.663004  223217 command_runner.go:130] >     },
	I0817 21:33:11.663009  223217 command_runner.go:130] >     {
	I0817 21:33:11.663015  223217 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0817 21:33:11.663021  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.663027  223217 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0817 21:33:11.663032  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663037  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.663044  223217 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0817 21:33:11.663056  223217 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0817 21:33:11.663062  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663066  223217 command_runner.go:130] >       "size": "113931062",
	I0817 21:33:11.663074  223217 command_runner.go:130] >       "uid": {
	I0817 21:33:11.663078  223217 command_runner.go:130] >         "value": "0"
	I0817 21:33:11.663083  223217 command_runner.go:130] >       },
	I0817 21:33:11.663087  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.663093  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.663097  223217 command_runner.go:130] >     },
	I0817 21:33:11.663100  223217 command_runner.go:130] >     {
	I0817 21:33:11.663107  223217 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0817 21:33:11.663112  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.663117  223217 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0817 21:33:11.663123  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663127  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.663134  223217 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0817 21:33:11.663143  223217 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0817 21:33:11.663147  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663154  223217 command_runner.go:130] >       "size": "72714135",
	I0817 21:33:11.663158  223217 command_runner.go:130] >       "uid": null,
	I0817 21:33:11.663162  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.663166  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.663170  223217 command_runner.go:130] >     },
	I0817 21:33:11.663174  223217 command_runner.go:130] >     {
	I0817 21:33:11.663180  223217 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0817 21:33:11.663186  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.663191  223217 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0817 21:33:11.663195  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663199  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.663207  223217 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0817 21:33:11.663249  223217 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0817 21:33:11.663261  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663265  223217 command_runner.go:130] >       "size": "59814710",
	I0817 21:33:11.663269  223217 command_runner.go:130] >       "uid": {
	I0817 21:33:11.663273  223217 command_runner.go:130] >         "value": "0"
	I0817 21:33:11.663277  223217 command_runner.go:130] >       },
	I0817 21:33:11.663280  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.663284  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.663288  223217 command_runner.go:130] >     },
	I0817 21:33:11.663292  223217 command_runner.go:130] >     {
	I0817 21:33:11.663298  223217 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0817 21:33:11.663305  223217 command_runner.go:130] >       "repoTags": [
	I0817 21:33:11.663309  223217 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0817 21:33:11.663314  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663318  223217 command_runner.go:130] >       "repoDigests": [
	I0817 21:33:11.663327  223217 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0817 21:33:11.663334  223217 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0817 21:33:11.663342  223217 command_runner.go:130] >       ],
	I0817 21:33:11.663346  223217 command_runner.go:130] >       "size": "750414",
	I0817 21:33:11.663350  223217 command_runner.go:130] >       "uid": {
	I0817 21:33:11.663355  223217 command_runner.go:130] >         "value": "65535"
	I0817 21:33:11.663359  223217 command_runner.go:130] >       },
	I0817 21:33:11.663363  223217 command_runner.go:130] >       "username": "",
	I0817 21:33:11.663370  223217 command_runner.go:130] >       "spec": null
	I0817 21:33:11.663373  223217 command_runner.go:130] >     }
	I0817 21:33:11.663377  223217 command_runner.go:130] >   ]
	I0817 21:33:11.663380  223217 command_runner.go:130] > }
	I0817 21:33:11.663876  223217 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:33:11.663898  223217 cache_images.go:84] Images are preloaded, skipping loading
	I0817 21:33:11.663991  223217 ssh_runner.go:195] Run: crio config
	I0817 21:33:11.714750  223217 command_runner.go:130] ! time="2023-08-17 21:33:11.704859116Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0817 21:33:11.714785  223217 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:33:11.722414  223217 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:33:11.722445  223217 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:33:11.722455  223217 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:33:11.722460  223217 command_runner.go:130] > #
	I0817 21:33:11.722481  223217 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:33:11.722498  223217 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:33:11.722507  223217 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:33:11.722518  223217 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:33:11.722523  223217 command_runner.go:130] > # reload'.
	I0817 21:33:11.722534  223217 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:33:11.722545  223217 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:33:11.722554  223217 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:33:11.722563  223217 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:33:11.722574  223217 command_runner.go:130] > [crio]
	I0817 21:33:11.722584  223217 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:33:11.722595  223217 command_runner.go:130] > # containers images, in this directory.
	I0817 21:33:11.722606  223217 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0817 21:33:11.722625  223217 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:33:11.722636  223217 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0817 21:33:11.722648  223217 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:33:11.722659  223217 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:33:11.722669  223217 command_runner.go:130] > storage_driver = "overlay"
	I0817 21:33:11.722678  223217 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:33:11.722684  223217 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:33:11.722698  223217 command_runner.go:130] > storage_option = [
	I0817 21:33:11.722704  223217 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0817 21:33:11.722708  223217 command_runner.go:130] > ]
	I0817 21:33:11.722718  223217 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:33:11.722724  223217 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:33:11.722731  223217 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:33:11.722742  223217 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:33:11.722750  223217 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:33:11.722755  223217 command_runner.go:130] > # always happen on a node reboot
	I0817 21:33:11.722762  223217 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:33:11.722768  223217 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:33:11.722773  223217 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:33:11.722784  223217 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:33:11.722789  223217 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:33:11.722797  223217 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:33:11.722812  223217 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:33:11.722821  223217 command_runner.go:130] > # internal_wipe = true
	I0817 21:33:11.722830  223217 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:33:11.722843  223217 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:33:11.722854  223217 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:33:11.722863  223217 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:33:11.722875  223217 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:33:11.722884  223217 command_runner.go:130] > [crio.api]
	I0817 21:33:11.722896  223217 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:33:11.722908  223217 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:33:11.722920  223217 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:33:11.722931  223217 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:33:11.722944  223217 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:33:11.722956  223217 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:33:11.722965  223217 command_runner.go:130] > # stream_port = "0"
	I0817 21:33:11.722977  223217 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:33:11.722986  223217 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:33:11.722994  223217 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:33:11.723001  223217 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:33:11.723007  223217 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:33:11.723017  223217 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:33:11.723029  223217 command_runner.go:130] > # minutes.
	I0817 21:33:11.723038  223217 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:33:11.723052  223217 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:33:11.723065  223217 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:33:11.723075  223217 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:33:11.723088  223217 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:33:11.723101  223217 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:33:11.723113  223217 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:33:11.723123  223217 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:33:11.723134  223217 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:33:11.723145  223217 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0817 21:33:11.723156  223217 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:33:11.723167  223217 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0817 21:33:11.723203  223217 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:33:11.723217  223217 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:33:11.723223  223217 command_runner.go:130] > [crio.runtime]
	I0817 21:33:11.723233  223217 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:33:11.723242  223217 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:33:11.723252  223217 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:33:11.723262  223217 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:33:11.723272  223217 command_runner.go:130] > # default_ulimits = [
	I0817 21:33:11.723280  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723289  223217 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:33:11.723295  223217 command_runner.go:130] > # no_pivot = false
	I0817 21:33:11.723301  223217 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:33:11.723310  223217 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:33:11.723318  223217 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:33:11.723324  223217 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:33:11.723331  223217 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:33:11.723337  223217 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:33:11.723344  223217 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0817 21:33:11.723348  223217 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:33:11.723357  223217 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:33:11.723363  223217 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:33:11.723370  223217 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:33:11.723377  223217 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:33:11.723385  223217 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:33:11.723391  223217 command_runner.go:130] > conmon_env = [
	I0817 21:33:11.723397  223217 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0817 21:33:11.723402  223217 command_runner.go:130] > ]
	I0817 21:33:11.723407  223217 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:33:11.723416  223217 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:33:11.723427  223217 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:33:11.723434  223217 command_runner.go:130] > # default_env = [
	I0817 21:33:11.723437  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723445  223217 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:33:11.723450  223217 command_runner.go:130] > # selinux = false
	I0817 21:33:11.723456  223217 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:33:11.723465  223217 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:33:11.723473  223217 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:33:11.723477  223217 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:33:11.723485  223217 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:33:11.723491  223217 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:33:11.723499  223217 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:33:11.723506  223217 command_runner.go:130] > # which might increase security.
	I0817 21:33:11.723511  223217 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0817 21:33:11.723519  223217 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:33:11.723527  223217 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:33:11.723535  223217 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:33:11.723543  223217 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0817 21:33:11.723548  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:33:11.723555  223217 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:33:11.723560  223217 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:33:11.723567  223217 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:33:11.723571  223217 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:33:11.723577  223217 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:33:11.723584  223217 command_runner.go:130] > # irqbalance daemon.
	I0817 21:33:11.723589  223217 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:33:11.723597  223217 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:33:11.723605  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:33:11.723612  223217 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:33:11.723617  223217 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:33:11.723625  223217 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:33:11.723633  223217 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:33:11.723638  223217 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:33:11.723644  223217 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:33:11.723652  223217 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:33:11.723658  223217 command_runner.go:130] > # will be added.
	I0817 21:33:11.723662  223217 command_runner.go:130] > # default_capabilities = [
	I0817 21:33:11.723668  223217 command_runner.go:130] > # 	"CHOWN",
	I0817 21:33:11.723672  223217 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:33:11.723682  223217 command_runner.go:130] > # 	"FSETID",
	I0817 21:33:11.723688  223217 command_runner.go:130] > # 	"FOWNER",
	I0817 21:33:11.723692  223217 command_runner.go:130] > # 	"SETGID",
	I0817 21:33:11.723697  223217 command_runner.go:130] > # 	"SETUID",
	I0817 21:33:11.723702  223217 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:33:11.723708  223217 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:33:11.723712  223217 command_runner.go:130] > # 	"KILL",
	I0817 21:33:11.723718  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723724  223217 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:33:11.723732  223217 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:33:11.723737  223217 command_runner.go:130] > # default_sysctls = [
	I0817 21:33:11.723740  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723748  223217 command_runner.go:130] > # List of devices on the host that a
	I0817 21:33:11.723754  223217 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:33:11.723760  223217 command_runner.go:130] > # allowed_devices = [
	I0817 21:33:11.723764  223217 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:33:11.723770  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723775  223217 command_runner.go:130] > # List of additional devices. specified as
	I0817 21:33:11.723785  223217 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:33:11.723790  223217 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:33:11.723810  223217 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:33:11.723825  223217 command_runner.go:130] > # additional_devices = [
	I0817 21:33:11.723828  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723833  223217 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:33:11.723837  223217 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:33:11.723843  223217 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:33:11.723849  223217 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:33:11.723855  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723863  223217 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:33:11.723876  223217 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:33:11.723881  223217 command_runner.go:130] > # Defaults to false.
	I0817 21:33:11.723889  223217 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:33:11.723899  223217 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:33:11.723904  223217 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:33:11.723910  223217 command_runner.go:130] > # hooks_dir = [
	I0817 21:33:11.723915  223217 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:33:11.723921  223217 command_runner.go:130] > # ]
	I0817 21:33:11.723927  223217 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:33:11.723936  223217 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:33:11.723943  223217 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:33:11.723946  223217 command_runner.go:130] > #
	I0817 21:33:11.723955  223217 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:33:11.723961  223217 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:33:11.723968  223217 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:33:11.723974  223217 command_runner.go:130] > #
	I0817 21:33:11.723983  223217 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:33:11.723991  223217 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:33:11.723999  223217 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:33:11.724007  223217 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:33:11.724010  223217 command_runner.go:130] > #
	I0817 21:33:11.724017  223217 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:33:11.724022  223217 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:33:11.724031  223217 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:33:11.724037  223217 command_runner.go:130] > pids_limit = 1024
	I0817 21:33:11.724047  223217 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0817 21:33:11.724060  223217 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:33:11.724074  223217 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:33:11.724088  223217 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:33:11.724097  223217 command_runner.go:130] > # log_size_max = -1
	I0817 21:33:11.724108  223217 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0817 21:33:11.724118  223217 command_runner.go:130] > # log_to_journald = false
	I0817 21:33:11.724128  223217 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:33:11.724139  223217 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:33:11.724150  223217 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:33:11.724161  223217 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:33:11.724170  223217 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:33:11.724174  223217 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:33:11.724182  223217 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:33:11.724186  223217 command_runner.go:130] > # read_only = false
	I0817 21:33:11.724199  223217 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:33:11.724207  223217 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:33:11.724212  223217 command_runner.go:130] > # live configuration reload.
	I0817 21:33:11.724218  223217 command_runner.go:130] > # log_level = "info"
	I0817 21:33:11.724224  223217 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:33:11.724231  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:33:11.724236  223217 command_runner.go:130] > # log_filter = ""
	I0817 21:33:11.724244  223217 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:33:11.724250  223217 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:33:11.724257  223217 command_runner.go:130] > # separated by comma.
	I0817 21:33:11.724261  223217 command_runner.go:130] > # uid_mappings = ""
	I0817 21:33:11.724268  223217 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:33:11.724274  223217 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:33:11.724284  223217 command_runner.go:130] > # separated by comma.
	I0817 21:33:11.724289  223217 command_runner.go:130] > # gid_mappings = ""
	I0817 21:33:11.724297  223217 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:33:11.724306  223217 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:33:11.724314  223217 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:33:11.724321  223217 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:33:11.724327  223217 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:33:11.724335  223217 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:33:11.724344  223217 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:33:11.724351  223217 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:33:11.724360  223217 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:33:11.724368  223217 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:33:11.724374  223217 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0817 21:33:11.724380  223217 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:33:11.724386  223217 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:33:11.724395  223217 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:33:11.724402  223217 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:33:11.724408  223217 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:33:11.724418  223217 command_runner.go:130] > drop_infra_ctr = false
	I0817 21:33:11.724426  223217 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:33:11.724434  223217 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:33:11.724443  223217 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:33:11.724451  223217 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:33:11.724458  223217 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:33:11.724465  223217 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:33:11.724469  223217 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:33:11.724478  223217 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:33:11.724485  223217 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0817 21:33:11.724492  223217 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:33:11.724500  223217 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:33:11.724509  223217 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:33:11.724515  223217 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:33:11.724521  223217 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:33:11.724530  223217 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0817 21:33:11.724541  223217 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0817 21:33:11.724551  223217 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:33:11.724561  223217 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:33:11.724568  223217 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:33:11.724573  223217 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:33:11.724579  223217 command_runner.go:130] > # ]
	I0817 21:33:11.724585  223217 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:33:11.724594  223217 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:33:11.724603  223217 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:33:11.724611  223217 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:33:11.724618  223217 command_runner.go:130] > #
	I0817 21:33:11.724622  223217 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:33:11.724630  223217 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:33:11.724636  223217 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:33:11.724641  223217 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:33:11.724648  223217 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:33:11.724652  223217 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:33:11.724658  223217 command_runner.go:130] > # Where:
	I0817 21:33:11.724664  223217 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:33:11.724673  223217 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:33:11.724682  223217 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:33:11.724690  223217 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:33:11.724698  223217 command_runner.go:130] > #   in $PATH.
	I0817 21:33:11.724704  223217 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:33:11.724711  223217 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:33:11.724717  223217 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:33:11.724723  223217 command_runner.go:130] > #   state.
	I0817 21:33:11.724729  223217 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:33:11.724737  223217 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0817 21:33:11.724743  223217 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:33:11.724751  223217 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:33:11.724759  223217 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:33:11.724768  223217 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:33:11.724773  223217 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:33:11.724782  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:33:11.724788  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:33:11.724797  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:33:11.724808  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:33:11.724817  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:33:11.724826  223217 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:33:11.724834  223217 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:33:11.724844  223217 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:33:11.724851  223217 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:33:11.724855  223217 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:33:11.724863  223217 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0817 21:33:11.724869  223217 command_runner.go:130] > runtime_type = "oci"
	I0817 21:33:11.724879  223217 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:33:11.724888  223217 command_runner.go:130] > runtime_config_path = ""
	I0817 21:33:11.724897  223217 command_runner.go:130] > monitor_path = ""
	I0817 21:33:11.724905  223217 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:33:11.724909  223217 command_runner.go:130] > monitor_exec_cgroup = ""
	I0817 21:33:11.724917  223217 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:33:11.724923  223217 command_runner.go:130] > # running containers
	I0817 21:33:11.724928  223217 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:33:11.724936  223217 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:33:11.724985  223217 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:33:11.724994  223217 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0817 21:33:11.725002  223217 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:33:11.725007  223217 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:33:11.725014  223217 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:33:11.725019  223217 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:33:11.725026  223217 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:33:11.725030  223217 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:33:11.725038  223217 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:33:11.725050  223217 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:33:11.725063  223217 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:33:11.725080  223217 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0817 21:33:11.725095  223217 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:33:11.725107  223217 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:33:11.725124  223217 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:33:11.725143  223217 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:33:11.725156  223217 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:33:11.725170  223217 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:33:11.725183  223217 command_runner.go:130] > # Example:
	I0817 21:33:11.725201  223217 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:33:11.725209  223217 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:33:11.725215  223217 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:33:11.725222  223217 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:33:11.725226  223217 command_runner.go:130] > # cpuset = 0
	I0817 21:33:11.725232  223217 command_runner.go:130] > # cpushares = "0-1"
	I0817 21:33:11.725236  223217 command_runner.go:130] > # Where:
	I0817 21:33:11.725243  223217 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:33:11.725253  223217 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:33:11.725261  223217 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:33:11.725266  223217 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:33:11.725274  223217 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:33:11.725282  223217 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0817 21:33:11.725286  223217 command_runner.go:130] > # 
	I0817 21:33:11.725294  223217 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:33:11.725298  223217 command_runner.go:130] > #
	I0817 21:33:11.725306  223217 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:33:11.725313  223217 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:33:11.725322  223217 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:33:11.725330  223217 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:33:11.725338  223217 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:33:11.725345  223217 command_runner.go:130] > [crio.image]
	I0817 21:33:11.725351  223217 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:33:11.725356  223217 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:33:11.725362  223217 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:33:11.725371  223217 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:33:11.725377  223217 command_runner.go:130] > # global_auth_file = ""
	I0817 21:33:11.725383  223217 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:33:11.725390  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:33:11.725394  223217 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:33:11.725403  223217 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:33:11.725412  223217 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:33:11.725417  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:33:11.725424  223217 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:33:11.725430  223217 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:33:11.725439  223217 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0817 21:33:11.725448  223217 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0817 21:33:11.725453  223217 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:33:11.725457  223217 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:33:11.725463  223217 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:33:11.725469  223217 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:33:11.725474  223217 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:33:11.725480  223217 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:33:11.725485  223217 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:33:11.725488  223217 command_runner.go:130] > # signature_policy = ""
	I0817 21:33:11.725494  223217 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:33:11.725500  223217 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:33:11.725504  223217 command_runner.go:130] > # changing them here.
	I0817 21:33:11.725508  223217 command_runner.go:130] > # insecure_registries = [
	I0817 21:33:11.725512  223217 command_runner.go:130] > # ]
	I0817 21:33:11.725523  223217 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:33:11.725527  223217 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0817 21:33:11.725532  223217 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:33:11.725538  223217 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:33:11.725542  223217 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:33:11.725548  223217 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:33:11.725551  223217 command_runner.go:130] > # CNI plugins.
	I0817 21:33:11.725555  223217 command_runner.go:130] > [crio.network]
	I0817 21:33:11.725560  223217 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:33:11.725565  223217 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0817 21:33:11.725569  223217 command_runner.go:130] > # cni_default_network = ""
	I0817 21:33:11.725574  223217 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:33:11.725578  223217 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:33:11.725584  223217 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:33:11.725587  223217 command_runner.go:130] > # plugin_dirs = [
	I0817 21:33:11.725591  223217 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:33:11.725594  223217 command_runner.go:130] > # ]
	I0817 21:33:11.725599  223217 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:33:11.725602  223217 command_runner.go:130] > [crio.metrics]
	I0817 21:33:11.725607  223217 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:33:11.725610  223217 command_runner.go:130] > enable_metrics = true
	I0817 21:33:11.725615  223217 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:33:11.725622  223217 command_runner.go:130] > # Per default all metrics are enabled.
	I0817 21:33:11.725628  223217 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:33:11.725636  223217 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:33:11.725644  223217 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:33:11.725650  223217 command_runner.go:130] > # metrics_collectors = [
	I0817 21:33:11.725657  223217 command_runner.go:130] > # 	"operations",
	I0817 21:33:11.725662  223217 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:33:11.725668  223217 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:33:11.725672  223217 command_runner.go:130] > # 	"operations_errors",
	I0817 21:33:11.725676  223217 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:33:11.725680  223217 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:33:11.725684  223217 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:33:11.725688  223217 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:33:11.725695  223217 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:33:11.725699  223217 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:33:11.725705  223217 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:33:11.725709  223217 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:33:11.725716  223217 command_runner.go:130] > # 	"containers_oom",
	I0817 21:33:11.725720  223217 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:33:11.725725  223217 command_runner.go:130] > # 	"operations_total",
	I0817 21:33:11.725729  223217 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:33:11.725733  223217 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:33:11.725738  223217 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:33:11.725744  223217 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:33:11.725748  223217 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:33:11.725755  223217 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:33:11.725759  223217 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:33:11.725766  223217 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:33:11.725770  223217 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:33:11.725776  223217 command_runner.go:130] > # ]
	I0817 21:33:11.725781  223217 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:33:11.725788  223217 command_runner.go:130] > # metrics_port = 9090
	I0817 21:33:11.725793  223217 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:33:11.725800  223217 command_runner.go:130] > # metrics_socket = ""
	I0817 21:33:11.725805  223217 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:33:11.725813  223217 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:33:11.725823  223217 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:33:11.725831  223217 command_runner.go:130] > # certificate on any modification event.
	I0817 21:33:11.725835  223217 command_runner.go:130] > # metrics_cert = ""
	I0817 21:33:11.725845  223217 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:33:11.725856  223217 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:33:11.725864  223217 command_runner.go:130] > # metrics_key = ""
	I0817 21:33:11.725877  223217 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:33:11.725884  223217 command_runner.go:130] > [crio.tracing]
	I0817 21:33:11.725890  223217 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:33:11.725897  223217 command_runner.go:130] > # enable_tracing = false
	I0817 21:33:11.725902  223217 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0817 21:33:11.725909  223217 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:33:11.725914  223217 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:33:11.725924  223217 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:33:11.725932  223217 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:33:11.725938  223217 command_runner.go:130] > [crio.stats]
	I0817 21:33:11.725944  223217 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:33:11.725953  223217 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:33:11.725959  223217 command_runner.go:130] > # stats_collection_period = 0
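
The generated crio.conf above ends with metrics enabled (enable_metrics = true) while metrics_port stays at its commented-out default of 9090. As a minimal sketch, assuming that default port on the local node and using the collector naming convention described in the comments, the exporter can be scraped from Go like this:

package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Assumes CRI-O's metrics server listens on the commented-out default
	// (metrics_port = 9090) on the local node.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("scraping CRI-O metrics: %v", err)
	}
	defer resp.Body.Close()

	// Print only the image-pull counters; the "container_runtime_crio_" prefix
	// follows the prefixing rule described in the config comments above.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "container_runtime_crio_image_pulls") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
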
	I0817 21:33:11.726066  223217 cni.go:84] Creating CNI manager for ""
	I0817 21:33:11.726084  223217 cni.go:136] 1 nodes found, recommending kindnet
	I0817 21:33:11.726108  223217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:33:11.726140  223217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959371 NodeName:multinode-959371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:33:11.726351  223217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959371"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:33:11.726425  223217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-959371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
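
The kubelet [Service] drop-in shown above is rendered in memory and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal sketch of producing an equivalent drop-in with Go's text/template; the kubeletFlags struct and the hard-coded values are illustrative, not minikube's actual implementation:

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeletFlags holds the handful of values that vary per node in the
// ExecStart line shown in the log above (hypothetical struct for this sketch).
type kubeletFlags struct {
	BinDir   string
	NodeName string
	NodeIP   string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	// Values copied from the log above; a real caller would derive them
	// from the cluster config.
	err := t.Execute(os.Stdout, kubeletFlags{
		BinDir:   "/var/lib/minikube/binaries/v1.27.4",
		NodeName: "multinode-959371",
		NodeIP:   "192.168.39.104",
	})
	if err != nil {
		log.Fatal(err)
	}
}
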
	I0817 21:33:11.726484  223217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:33:11.736117  223217 command_runner.go:130] > kubeadm
	I0817 21:33:11.736155  223217 command_runner.go:130] > kubectl
	I0817 21:33:11.736162  223217 command_runner.go:130] > kubelet
	I0817 21:33:11.736191  223217 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:33:11.736248  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:33:11.745638  223217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0817 21:33:11.762976  223217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:33:11.781827  223217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0817 21:33:11.799620  223217 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0817 21:33:11.803887  223217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:33:11.817915  223217 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371 for IP: 192.168.39.104
	I0817 21:33:11.817959  223217 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:11.818214  223217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:33:11.818299  223217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:33:11.818356  223217 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key
	I0817 21:33:11.818370  223217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt with IP's: []
	I0817 21:33:11.888437  223217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt ...
	I0817 21:33:11.888478  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt: {Name:mk20af1a9d50c64b626c13a85c5fcaa12d63ddf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:11.888653  223217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key ...
	I0817 21:33:11.888664  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key: {Name:mk55a094c66863515cdb4ea2e630941e3069a3b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:11.888746  223217 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key.a10f9b59
	I0817 21:33:11.888763  223217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt.a10f9b59 with IP's: [192.168.39.104 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:33:12.356758  223217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt.a10f9b59 ...
	I0817 21:33:12.356794  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt.a10f9b59: {Name:mk646f6c32a9c8d69856702e0d0b59ac09559415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:12.357016  223217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key.a10f9b59 ...
	I0817 21:33:12.357037  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key.a10f9b59: {Name:mkd75633134609423f881597740e044833fbdeb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:12.357138  223217 certs.go:337] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt.a10f9b59 -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt
	I0817 21:33:12.357227  223217 certs.go:341] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key.a10f9b59 -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key
	I0817 21:33:12.357302  223217 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key
	I0817 21:33:12.357322  223217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt with IP's: []
	I0817 21:33:12.727857  223217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt ...
	I0817 21:33:12.727893  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt: {Name:mkfb194092c7056da830f8e2700145de476defd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:12.747177  223217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key ...
	I0817 21:33:12.747217  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key: {Name:mk154964e7786393f896a17070e199b52b2f0630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:12.747351  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 21:33:12.747375  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 21:33:12.747385  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 21:33:12.747395  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 21:33:12.747405  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:33:12.747420  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:33:12.747441  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:33:12.747453  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:33:12.747514  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 21:33:12.747552  223217 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 21:33:12.747562  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:33:12.747582  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:33:12.747615  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:33:12.747643  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:33:12.747683  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:33:12.747708  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem -> /usr/share/ca-certificates/210670.pem
	I0817 21:33:12.747722  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /usr/share/ca-certificates/2106702.pem
	I0817 21:33:12.747735  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:33:12.748242  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:33:12.775814  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 21:33:12.801920  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:33:12.828771  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 21:33:12.854520  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:33:12.879711  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:33:12.904366  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:33:12.931399  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:33:12.957803  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 21:33:12.983784  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 21:33:13.010096  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:33:13.035014  223217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:33:13.053584  223217 ssh_runner.go:195] Run: openssl version
	I0817 21:33:13.059426  223217 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0817 21:33:13.059530  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 21:33:13.069884  223217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 21:33:13.074981  223217 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:33:13.075061  223217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:33:13.075135  223217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 21:33:13.080821  223217 command_runner.go:130] > 51391683
	I0817 21:33:13.081088  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 21:33:13.091433  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 21:33:13.101963  223217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 21:33:13.107214  223217 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:33:13.107253  223217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:33:13.107319  223217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 21:33:13.113216  223217 command_runner.go:130] > 3ec20f2e
	I0817 21:33:13.113536  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:33:13.126997  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:33:13.138013  223217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:33:13.143349  223217 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:33:13.143469  223217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:33:13.143541  223217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:33:13.149537  223217 command_runner.go:130] > b5213941
	I0817 21:33:13.149635  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
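
The three openssl x509 -hash calls above compute the subject-name hashes (51391683, 3ec20f2e, b5213941) that OpenSSL uses to look up trusted certificates in /etc/ssl/certs, and the ln -fs commands create the matching <hash>.0 symlinks. A minimal sketch of the same idea in Go, shelling out to openssl just as the log does; the certificate path is one of the examples from the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

	// openssl x509 -hash -noout -in <cert> prints the subject-name hash,
	// e.g. "b5213941", which OpenSSL expects as the symlink name <hash>.0.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatalf("hashing %s: %v", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Mirrors the "test -L ... || ln -fs ..." guard in the log: only create
	// the link if it is not already present.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(certPath, link); err != nil {
			log.Fatalf("linking %s: %v", link, err)
		}
	}
	fmt.Println("trusted via", link)
}
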
	I0817 21:33:13.160256  223217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:33:13.164903  223217 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:33:13.164976  223217 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:33:13.165031  223217 kubeadm.go:404] StartCluster: {Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:33:13.165204  223217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:33:13.165256  223217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:33:13.201293  223217 cri.go:89] found id: ""
	I0817 21:33:13.201383  223217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:33:13.210889  223217 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0817 21:33:13.210928  223217 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0817 21:33:13.210939  223217 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0817 21:33:13.211045  223217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:33:13.220536  223217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:33:13.229657  223217 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0817 21:33:13.229700  223217 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0817 21:33:13.229707  223217 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0817 21:33:13.229716  223217 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:33:13.229757  223217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:33:13.229829  223217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 21:33:13.344638  223217 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 21:33:13.344668  223217 command_runner.go:130] > [init] Using Kubernetes version: v1.27.4
	I0817 21:33:13.344702  223217 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:33:13.344709  223217 command_runner.go:130] > [preflight] Running pre-flight checks
	I0817 21:33:13.611158  223217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:33:13.611196  223217 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:33:13.611322  223217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:33:13.611337  223217 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:33:13.611462  223217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:33:13.611479  223217 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:33:13.800722  223217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:33:13.949843  223217 out.go:204]   - Generating certificates and keys ...
	I0817 21:33:13.800800  223217 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:33:13.950067  223217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:33:13.950087  223217 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0817 21:33:13.950177  223217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:33:13.950203  223217 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0817 21:33:13.979332  223217 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:33:13.979369  223217 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:33:14.195666  223217 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:33:14.195706  223217 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:33:14.270821  223217 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:33:14.270851  223217 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0817 21:33:14.408774  223217 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:33:14.408807  223217 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0817 21:33:14.469550  223217 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:33:14.469579  223217 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0817 21:33:14.469760  223217 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-959371] and IPs [192.168.39.104 127.0.0.1 ::1]
	I0817 21:33:14.469776  223217 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-959371] and IPs [192.168.39.104 127.0.0.1 ::1]
	I0817 21:33:14.645679  223217 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:33:14.645722  223217 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0817 21:33:14.645861  223217 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-959371] and IPs [192.168.39.104 127.0.0.1 ::1]
	I0817 21:33:14.645878  223217 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-959371] and IPs [192.168.39.104 127.0.0.1 ::1]
	I0817 21:33:14.945686  223217 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:33:14.945716  223217 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:33:15.051443  223217 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:33:15.051479  223217 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:33:15.379857  223217 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:33:15.379895  223217 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0817 21:33:15.380004  223217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:33:15.380025  223217 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:33:15.461105  223217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:33:15.461141  223217 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:33:15.735055  223217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:33:15.735095  223217 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:33:15.898610  223217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:33:15.898650  223217 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:33:16.380245  223217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:33:16.380276  223217 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:33:16.395054  223217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:33:16.395077  223217 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:33:16.395931  223217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:33:16.395948  223217 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:33:16.396028  223217 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:33:16.396039  223217 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:33:16.525301  223217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:33:16.527511  223217 out.go:204]   - Booting up control plane ...
	I0817 21:33:16.525365  223217 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:33:16.527634  223217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:33:16.527651  223217 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:33:16.530280  223217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:33:16.530294  223217 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:33:16.531467  223217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:33:16.531482  223217 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:33:16.532396  223217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:33:16.532417  223217 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:33:16.537354  223217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:33:16.537381  223217 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:33:25.040097  223217 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504611 seconds
	I0817 21:33:25.040153  223217 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504611 seconds
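
kubeadm's wait-control-plane phase is in effect a health poll against the new API server, which here reported healthy after roughly 8.5 seconds. A minimal sketch of an equivalent check against the advertise address from the config above; skipping TLS verification is purely to keep the sketch self-contained, a real client would trust /var/lib/minikube/certs/ca.crt instead:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// /healthz on the kube-apiserver advertise address from the kubeadm config above.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute) // kubeadm's stated upper bound
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.104:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("control plane is healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("control plane did not become healthy in time")
}
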
	I0817 21:33:25.040330  223217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:33:25.040357  223217 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:33:25.068164  223217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:33:25.068207  223217 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:33:25.609492  223217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:33:25.609533  223217 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:33:25.609744  223217 kubeadm.go:322] [mark-control-plane] Marking the node multinode-959371 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:33:25.609761  223217 command_runner.go:130] > [mark-control-plane] Marking the node multinode-959371 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:33:26.126414  223217 kubeadm.go:322] [bootstrap-token] Using token: f37hvi.4zexbt98dbvfdrzi
	I0817 21:33:26.128430  223217 out.go:204]   - Configuring RBAC rules ...
	I0817 21:33:26.126503  223217 command_runner.go:130] > [bootstrap-token] Using token: f37hvi.4zexbt98dbvfdrzi
	I0817 21:33:26.128570  223217 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:33:26.128586  223217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:33:26.137767  223217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:33:26.137817  223217 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:33:26.147632  223217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:33:26.147668  223217 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:33:26.156271  223217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:33:26.156297  223217 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:33:26.162163  223217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:33:26.162190  223217 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:33:26.169001  223217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:33:26.169032  223217 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:33:26.189991  223217 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:33:26.190083  223217 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:33:26.445085  223217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:33:26.445130  223217 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0817 21:33:26.546268  223217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:33:26.546299  223217 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0817 21:33:26.547314  223217 kubeadm.go:322] 
	I0817 21:33:26.547407  223217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:33:26.547428  223217 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0817 21:33:26.547433  223217 kubeadm.go:322] 
	I0817 21:33:26.547529  223217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:33:26.547540  223217 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0817 21:33:26.547545  223217 kubeadm.go:322] 
	I0817 21:33:26.547579  223217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:33:26.547588  223217 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0817 21:33:26.547664  223217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:33:26.547674  223217 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:33:26.547736  223217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:33:26.547746  223217 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:33:26.547751  223217 kubeadm.go:322] 
	I0817 21:33:26.547819  223217 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 21:33:26.547829  223217 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0817 21:33:26.547833  223217 kubeadm.go:322] 
	I0817 21:33:26.547904  223217 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:33:26.547918  223217 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:33:26.547924  223217 kubeadm.go:322] 
	I0817 21:33:26.547970  223217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:33:26.547976  223217 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0817 21:33:26.548036  223217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:33:26.548043  223217 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:33:26.548098  223217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:33:26.548104  223217 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:33:26.548107  223217 kubeadm.go:322] 
	I0817 21:33:26.548181  223217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:33:26.548188  223217 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:33:26.548311  223217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:33:26.548334  223217 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0817 21:33:26.548341  223217 kubeadm.go:322] 
	I0817 21:33:26.548460  223217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f37hvi.4zexbt98dbvfdrzi \
	I0817 21:33:26.548473  223217 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token f37hvi.4zexbt98dbvfdrzi \
	I0817 21:33:26.548635  223217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 21:33:26.548652  223217 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 21:33:26.548689  223217 kubeadm.go:322] 	--control-plane 
	I0817 21:33:26.548699  223217 command_runner.go:130] > 	--control-plane 
	I0817 21:33:26.548705  223217 kubeadm.go:322] 
	I0817 21:33:26.548854  223217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:33:26.548865  223217 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:33:26.548870  223217 kubeadm.go:322] 
	I0817 21:33:26.548985  223217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f37hvi.4zexbt98dbvfdrzi \
	I0817 21:33:26.548999  223217 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token f37hvi.4zexbt98dbvfdrzi \
	I0817 21:33:26.549122  223217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 21:33:26.549135  223217 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 21:33:26.549586  223217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:33:26.549600  223217 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
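
The --discovery-token-ca-cert-hash value repeated in both join commands is a SHA-256 pin of the cluster CA's Subject Public Key Info. A minimal sketch that recomputes the same value, assuming the CA sits in the certificatesDir configured above:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the kubeadm config's certificatesDir above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo,
	// which is what --discovery-token-ca-cert-hash sha256:<hex> encodes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
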
	I0817 21:33:26.549613  223217 cni.go:84] Creating CNI manager for ""
	I0817 21:33:26.549639  223217 cni.go:136] 1 nodes found, recommending kindnet
	I0817 21:33:26.551710  223217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:33:26.553348  223217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:33:26.565056  223217 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:33:26.565086  223217 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0817 21:33:26.565096  223217 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0817 21:33:26.565105  223217 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:33:26.565130  223217 command_runner.go:130] > Access: 2023-08-17 21:32:53.856618572 +0000
	I0817 21:33:26.565140  223217 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0817 21:33:26.565148  223217 command_runner.go:130] > Change: 2023-08-17 21:32:51.964618572 +0000
	I0817 21:33:26.565154  223217 command_runner.go:130] >  Birth: -
	I0817 21:33:26.565248  223217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:33:26.565266  223217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:33:26.592270  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:33:27.570297  223217 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0817 21:33:27.581942  223217 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0817 21:33:27.591865  223217 command_runner.go:130] > serviceaccount/kindnet created
	I0817 21:33:27.607878  223217 command_runner.go:130] > daemonset.apps/kindnet created
	I0817 21:33:27.610607  223217 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.018289556s)
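The apply above pushes the kindnet CNI manifest staged at /var/tmp/minikube/cni.yaml and reports the clusterrole, clusterrolebinding, serviceaccount and daemonset as created. A minimal sketch of checking those objects afterwards, assuming a kubectl pointed at this cluster (the manifest output does not show the daemonset's namespace, so the search below is namespace-agnostic):

  kubectl get clusterrole kindnet
  kubectl get clusterrolebinding kindnet
  kubectl get daemonset --all-namespaces | grep kindnet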
	I0817 21:33:27.610671  223217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:33:27.610795  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:27.610804  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=multinode-959371 minikube.k8s.io/updated_at=2023_08_17T21_33_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:27.636974  223217 command_runner.go:130] > -16
	I0817 21:33:27.776828  223217 ops.go:34] apiserver oom_adj: -16
	I0817 21:33:27.776881  223217 command_runner.go:130] > node/multinode-959371 labeled
	I0817 21:33:27.812441  223217 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
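The two runs above grant cluster-admin to the kube-system:default ServiceAccount (minikube-rbac) and label the control-plane node, while ops.go records the apiserver oom_adj of -16. A minimal sketch of verifying both results, assuming a kubectl pointed at this cluster; the tr/grep filter is just an illustrative way to pull out the minikube labels:

  kubectl get clusterrolebinding minikube-rbac -o wide
  kubectl get node multinode-959371 --show-labels | tr ',' '\n' | grep minikube.k8s.io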
	I0817 21:33:27.814444  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:27.936577  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:27.936709  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:28.025917  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:28.526727  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:28.612422  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:29.027157  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:29.120486  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:29.526161  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:29.614945  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:30.026254  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:30.107935  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:30.526737  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:30.620025  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:31.027143  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:31.118258  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:31.526540  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:31.611957  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:32.026355  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:32.111667  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:32.526276  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:32.614424  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:33.027113  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:33.115903  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:33.526463  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:33.619593  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:34.027017  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:34.108348  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:34.526544  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:34.609294  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:35.026798  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:35.151455  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:35.526180  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:35.623906  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:36.026465  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:36.118224  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:36.526228  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:36.624558  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:37.026154  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:37.125083  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:37.526729  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:37.617938  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:38.026819  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:38.188470  223217 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:33:38.527035  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:33:38.664445  223217 command_runner.go:130] > NAME      SECRETS   AGE
	I0817 21:33:38.664470  223217 command_runner.go:130] > default   0         0s
	I0817 21:33:38.666010  223217 kubeadm.go:1081] duration metric: took 11.055298208s to wait for elevateKubeSystemPrivileges.
	I0817 21:33:38.666036  223217 kubeadm.go:406] StartCluster complete in 25.50101151s
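The repeated "serviceaccounts \"default\" not found" lines above are a poll: kubeadm.go waits until the ServiceAccount controller has created the default ServiceAccount before finishing elevateKubeSystemPrivileges (about 11s here). A minimal shell sketch of the same wait, assuming a kubectl pointed at this cluster; the loop count and sleep interval are illustrative choices, not minikube's:

  # poll roughly twice a second, give up after ~60s
  for i in $(seq 1 120); do
    kubectl get serviceaccount default >/dev/null 2>&1 && break
    sleep 0.5
  done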
	I0817 21:33:38.666076  223217 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:38.666171  223217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:33:38.667151  223217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:33:38.667434  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:33:38.667529  223217 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:33:38.667652  223217 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:33:38.667658  223217 addons.go:69] Setting storage-provisioner=true in profile "multinode-959371"
	I0817 21:33:38.667685  223217 addons.go:231] Setting addon storage-provisioner=true in "multinode-959371"
	I0817 21:33:38.667685  223217 addons.go:69] Setting default-storageclass=true in profile "multinode-959371"
	I0817 21:33:38.667724  223217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-959371"
	I0817 21:33:38.667753  223217 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:33:38.667791  223217 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:33:38.668094  223217 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:33:38.668251  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:33:38.668300  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:33:38.668388  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:33:38.668426  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:33:38.669044  223217 cert_rotation.go:137] Starting client certificate rotation controller
	I0817 21:33:38.669476  223217 round_trippers.go:463] GET https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:33:38.669495  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:38.669507  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:38.669517  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:38.684075  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0817 21:33:38.684542  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I0817 21:33:38.684561  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:33:38.684979  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:33:38.685133  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:33:38.685154  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:33:38.685471  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:33:38.685492  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:33:38.685508  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:33:38.685813  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:33:38.686002  223217 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:33:38.686088  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:33:38.686117  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:33:38.688056  223217 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0817 21:33:38.688077  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:38.688089  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:38 GMT
	I0817 21:33:38.688098  223217 round_trippers.go:580]     Audit-Id: abba5cad-8e66-4286-9f2d-5eff2b68e30f
	I0817 21:33:38.688106  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:38.688112  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:38.688120  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:38.688125  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:38.688130  223217 round_trippers.go:580]     Content-Length: 291
	I0817 21:33:38.688173  223217 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"334","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0817 21:33:38.688306  223217 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:33:38.688633  223217 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"334","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0817 21:33:38.688692  223217 round_trippers.go:463] PUT https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:33:38.688636  223217 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:33:38.688703  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:38.688768  223217 round_trippers.go:473]     Content-Type: application/json
	I0817 21:33:38.688793  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:38.688804  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:38.688998  223217 round_trippers.go:463] GET https://192.168.39.104:8443/apis/storage.k8s.io/v1/storageclasses
	I0817 21:33:38.689015  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:38.689026  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:38.689036  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:38.695364  223217 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0817 21:33:38.695385  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:38.695392  223217 round_trippers.go:580]     Content-Length: 109
	I0817 21:33:38.695398  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:38 GMT
	I0817 21:33:38.695403  223217 round_trippers.go:580]     Audit-Id: c05073aa-adfb-4940-a64d-552a15a6dc06
	I0817 21:33:38.695409  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:38.695414  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:38.695419  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:38.695425  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:38.695443  223217 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"349"},"items":[]}
	I0817 21:33:38.695746  223217 addons.go:231] Setting addon default-storageclass=true in "multinode-959371"
	I0817 21:33:38.695794  223217 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:33:38.696105  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:33:38.696133  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:33:38.701286  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41939
	I0817 21:33:38.701701  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:33:38.702263  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:33:38.702293  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:33:38.702623  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:33:38.702858  223217 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:33:38.704541  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:38.706536  223217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:33:38.708108  223217 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:33:38.708129  223217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:33:38.708150  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:38.711428  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:38.711835  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:38.711867  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:38.711954  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39699
	I0817 21:33:38.712153  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:38.712331  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:38.712366  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:33:38.712508  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:38.712669  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:33:38.712882  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:33:38.712903  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:33:38.713262  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:33:38.713722  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:33:38.713772  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:33:38.718842  223217 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0817 21:33:38.718865  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:38.718875  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:38.718885  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:38.718893  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:38.718906  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:38.718916  223217 round_trippers.go:580]     Content-Length: 291
	I0817 21:33:38.718926  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:38 GMT
	I0817 21:33:38.718939  223217 round_trippers.go:580]     Audit-Id: c65056e6-c606-4006-b969-9d2ccfcbe913
	I0817 21:33:38.718973  223217 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"350","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0817 21:33:38.719162  223217 round_trippers.go:463] GET https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:33:38.719196  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:38.719208  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:38.719218  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:38.723978  223217 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:33:38.724002  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:38.724013  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:38.724025  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:38.724035  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:38.724047  223217 round_trippers.go:580]     Content-Length: 291
	I0817 21:33:38.724064  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:38 GMT
	I0817 21:33:38.724076  223217 round_trippers.go:580]     Audit-Id: 0d5a9829-b1a2-41f8-a770-b9b712b62b04
	I0817 21:33:38.724086  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:38.724117  223217 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"350","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0817 21:33:38.724235  223217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-959371" context rescaled to 1 replicas
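The round-trip trace above reads the coredns Deployment's scale subresource (spec.replicas: 2), PUTs it back with spec.replicas: 1, then re-reads it to confirm. A minimal sketch of the equivalent rescale and check with plain kubectl (same effect, though not the client-go path minikube itself uses), assuming a kubectl pointed at this cluster:

  kubectl -n kube-system scale deployment coredns --replicas=1
  kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'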
	I0817 21:33:38.724271  223217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:33:38.726385  223217 out.go:177] * Verifying Kubernetes components...
	I0817 21:33:38.728349  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:33:38.728687  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0817 21:33:38.729213  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:33:38.729710  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:33:38.729729  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:33:38.730026  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:33:38.730204  223217 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:33:38.731831  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:33:38.732072  223217 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:33:38.732087  223217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:33:38.732105  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:33:38.735016  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:38.735431  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:33:38.735466  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:33:38.735637  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:33:38.735824  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:33:38.735994  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:33:38.736110  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:33:38.867957  223217 command_runner.go:130] > apiVersion: v1
	I0817 21:33:38.867981  223217 command_runner.go:130] > data:
	I0817 21:33:38.867985  223217 command_runner.go:130] >   Corefile: |
	I0817 21:33:38.867989  223217 command_runner.go:130] >     .:53 {
	I0817 21:33:38.867993  223217 command_runner.go:130] >         errors
	I0817 21:33:38.867998  223217 command_runner.go:130] >         health {
	I0817 21:33:38.868003  223217 command_runner.go:130] >            lameduck 5s
	I0817 21:33:38.868006  223217 command_runner.go:130] >         }
	I0817 21:33:38.868010  223217 command_runner.go:130] >         ready
	I0817 21:33:38.868019  223217 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0817 21:33:38.868025  223217 command_runner.go:130] >            pods insecure
	I0817 21:33:38.868033  223217 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0817 21:33:38.868044  223217 command_runner.go:130] >            ttl 30
	I0817 21:33:38.868052  223217 command_runner.go:130] >         }
	I0817 21:33:38.868060  223217 command_runner.go:130] >         prometheus :9153
	I0817 21:33:38.868068  223217 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0817 21:33:38.868079  223217 command_runner.go:130] >            max_concurrent 1000
	I0817 21:33:38.868084  223217 command_runner.go:130] >         }
	I0817 21:33:38.868088  223217 command_runner.go:130] >         cache 30
	I0817 21:33:38.868093  223217 command_runner.go:130] >         loop
	I0817 21:33:38.868099  223217 command_runner.go:130] >         reload
	I0817 21:33:38.868103  223217 command_runner.go:130] >         loadbalance
	I0817 21:33:38.868107  223217 command_runner.go:130] >     }
	I0817 21:33:38.868111  223217 command_runner.go:130] > kind: ConfigMap
	I0817 21:33:38.868115  223217 command_runner.go:130] > metadata:
	I0817 21:33:38.868123  223217 command_runner.go:130] >   creationTimestamp: "2023-08-17T21:33:26Z"
	I0817 21:33:38.868129  223217 command_runner.go:130] >   name: coredns
	I0817 21:33:38.868149  223217 command_runner.go:130] >   namespace: kube-system
	I0817 21:33:38.868161  223217 command_runner.go:130] >   resourceVersion: "259"
	I0817 21:33:38.868170  223217 command_runner.go:130] >   uid: e9226e04-c717-47b9-9786-67441c6d4d26
	I0817 21:33:38.869876  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 21:33:38.869970  223217 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:33:38.870271  223217 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:33:38.870632  223217 node_ready.go:35] waiting up to 6m0s for node "multinode-959371" to be "Ready" ...
	I0817 21:33:38.870717  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:38.870728  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:38.870740  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:38.870751  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:38.874313  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:38.874331  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:38.874338  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:38.874344  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:38.874350  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:38 GMT
	I0817 21:33:38.874356  223217 round_trippers.go:580]     Audit-Id: b80843c4-10ec-467a-b0b7-5e8e99b94f3f
	I0817 21:33:38.874361  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:38.874368  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:38.874552  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:38.875208  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:38.875223  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:38.875231  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:38.875238  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:38.883043  223217 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0817 21:33:38.883068  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:38.883076  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:38.883083  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:38.883088  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:38 GMT
	I0817 21:33:38.883094  223217 round_trippers.go:580]     Audit-Id: 575e8390-16c0-40ca-9ed8-0db89f702797
	I0817 21:33:38.883099  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:38.883104  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:38.885349  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:38.904525  223217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:33:38.953499  223217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:33:39.386095  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:39.386132  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:39.386145  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:39.386155  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:39.418845  223217 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0817 21:33:39.418873  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:39.418881  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:39 GMT
	I0817 21:33:39.418887  223217 round_trippers.go:580]     Audit-Id: feabe862-eb9c-400f-8620-38ec306822ca
	I0817 21:33:39.418893  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:39.418898  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:39.418904  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:39.418912  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:39.421508  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:39.852927  223217 command_runner.go:130] > configmap/coredns replaced
	I0817 21:33:39.864861  223217 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
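The sed pipeline at 21:33:38 inserts a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward plugin and a log directive ahead of errors, then replaces the coredns ConfigMap; the line above confirms the host record landed. A minimal sketch of inspecting the rewritten Corefile, assuming a kubectl pointed at this cluster:

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'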
	I0817 21:33:39.886145  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:39.886174  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:39.886184  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:39.886191  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:39.889793  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:39.889824  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:39.889836  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:39.889854  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:39 GMT
	I0817 21:33:39.889863  223217 round_trippers.go:580]     Audit-Id: 3dfa6c5c-7bc5-4932-92c7-b06ba13954ad
	I0817 21:33:39.889872  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:39.889882  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:39.889890  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:39.890811  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:39.911582  223217 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0817 21:33:39.919811  223217 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0817 21:33:39.933688  223217 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0817 21:33:39.943659  223217 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0817 21:33:39.952925  223217 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0817 21:33:39.968328  223217 command_runner.go:130] > pod/storage-provisioner created
	I0817 21:33:39.971042  223217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.066470173s)
	I0817 21:33:39.971098  223217 main.go:141] libmachine: Making call to close driver server
	I0817 21:33:39.971113  223217 main.go:141] libmachine: (multinode-959371) Calling .Close
	I0817 21:33:39.971052  223217 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0817 21:33:39.971190  223217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017654516s)
	I0817 21:33:39.971236  223217 main.go:141] libmachine: Making call to close driver server
	I0817 21:33:39.971254  223217 main.go:141] libmachine: (multinode-959371) Calling .Close
	I0817 21:33:39.971440  223217 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:33:39.971460  223217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:33:39.971477  223217 main.go:141] libmachine: Making call to close driver server
	I0817 21:33:39.971487  223217 main.go:141] libmachine: (multinode-959371) Calling .Close
	I0817 21:33:39.971548  223217 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:33:39.971566  223217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:33:39.971580  223217 main.go:141] libmachine: Making call to close driver server
	I0817 21:33:39.971594  223217 main.go:141] libmachine: (multinode-959371) Calling .Close
	I0817 21:33:39.971620  223217 main.go:141] libmachine: (multinode-959371) DBG | Closing plugin on server side
	I0817 21:33:39.971712  223217 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:33:39.971727  223217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:33:39.971742  223217 main.go:141] libmachine: (multinode-959371) DBG | Closing plugin on server side
	I0817 21:33:39.971892  223217 main.go:141] libmachine: (multinode-959371) DBG | Closing plugin on server side
	I0817 21:33:39.971959  223217 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:33:39.971974  223217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:33:39.971984  223217 main.go:141] libmachine: Making call to close driver server
	I0817 21:33:39.971995  223217 main.go:141] libmachine: (multinode-959371) Calling .Close
	I0817 21:33:39.972292  223217 main.go:141] libmachine: (multinode-959371) DBG | Closing plugin on server side
	I0817 21:33:39.972308  223217 main.go:141] libmachine: Successfully made call to close driver server
	I0817 21:33:39.972335  223217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 21:33:39.974513  223217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 21:33:39.976160  223217 addons.go:502] enable addons completed in 1.308627049s: enabled=[storage-provisioner default-storageclass]
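The addon phase ends with storage-provisioner and default-storageclass enabled. A minimal sketch of confirming both, using the object names reported as created above and assuming a kubectl pointed at this cluster:

  kubectl -n kube-system get pod storage-provisioner
  kubectl get storageclass standard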
	I0817 21:33:40.386389  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:40.386415  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:40.386424  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:40.386430  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:40.389426  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:40.389460  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:40.389470  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:40.389479  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:40.389487  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:40.389496  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:40.389504  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:40 GMT
	I0817 21:33:40.389513  223217 round_trippers.go:580]     Audit-Id: 1c0b830b-0265-4b34-a687-504e3edecc37
	I0817 21:33:40.389644  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:40.886370  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:40.886397  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:40.886406  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:40.886412  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:40.889609  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:40.889644  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:40.889654  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:40.889662  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:40.889669  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:40.889677  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:40 GMT
	I0817 21:33:40.889685  223217 round_trippers.go:580]     Audit-Id: c32972f8-078b-4418-90c5-dcc7d56cc739
	I0817 21:33:40.889692  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:40.889910  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:40.890284  223217 node_ready.go:58] node "multinode-959371" has status "Ready":"False"
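From here on the log is node_ready.go polling the Node object until its Ready condition turns True, inside the 6m0s budget set by start.go above. A minimal shell equivalent of that wait, assuming a kubectl pointed at this cluster (the timeout mirrors the logged budget; kubectl wait is not what minikube itself calls):

  kubectl wait --for=condition=Ready node/multinode-959371 --timeout=6m0s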
	I0817 21:33:41.386665  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:41.386691  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:41.386699  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:41.386706  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:41.389728  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:41.389758  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:41.389769  223217 round_trippers.go:580]     Audit-Id: 4f2aebc8-1ecf-43e2-abb5-1f608909fe9a
	I0817 21:33:41.389778  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:41.389787  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:41.389796  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:41.389805  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:41.389813  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:41 GMT
	I0817 21:33:41.389994  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:41.886732  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:41.886758  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:41.886770  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:41.886778  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:41.889643  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:41.889672  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:41.889684  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:41.889693  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:41.889701  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:41.889708  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:41 GMT
	I0817 21:33:41.889716  223217 round_trippers.go:580]     Audit-Id: 9793f0f5-c6cf-4dbb-8feb-942cb3bf9101
	I0817 21:33:41.889725  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:41.889943  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:42.386576  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:42.386604  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:42.386612  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:42.386618  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:42.389548  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:42.389570  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:42.389578  223217 round_trippers.go:580]     Audit-Id: 465d0336-3294-48b3-afa5-ce85b1f61d10
	I0817 21:33:42.389584  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:42.389589  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:42.389594  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:42.389600  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:42.389605  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:42 GMT
	I0817 21:33:42.389788  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:42.886531  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:42.886575  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:42.886584  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:42.886590  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:42.889393  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:42.889414  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:42.889422  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:42.889430  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:42.889439  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:42.889447  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:42 GMT
	I0817 21:33:42.889454  223217 round_trippers.go:580]     Audit-Id: 5ec09698-94bb-4d43-b1d7-9b1b32271ad7
	I0817 21:33:42.889459  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:42.889981  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:42.890382  223217 node_ready.go:58] node "multinode-959371" has status "Ready":"False"
	I0817 21:33:43.386727  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:43.386753  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:43.386761  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:43.386767  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:43.390597  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:43.390623  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:43.390633  223217 round_trippers.go:580]     Audit-Id: 8e605295-f8c9-40db-ac9b-251cc81cc869
	I0817 21:33:43.390641  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:43.390649  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:43.390656  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:43.390664  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:43.390671  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:43 GMT
	I0817 21:33:43.390914  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:43.886174  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:43.886203  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:43.886215  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:43.886224  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:43.889328  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:43.889354  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:43.889362  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:43.889368  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:43 GMT
	I0817 21:33:43.889374  223217 round_trippers.go:580]     Audit-Id: 0dc5b91c-14d6-4805-970d-4cf320a38f5c
	I0817 21:33:43.889379  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:43.889384  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:43.889392  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:43.889529  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:44.386222  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:44.386251  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.386262  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.386271  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.389179  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:44.389200  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.389208  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.389213  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.389219  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.389224  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.389229  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.389235  223217 round_trippers.go:580]     Audit-Id: 1b05bc69-967d-40dd-9988-6e16b5778beb
	I0817 21:33:44.389351  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"348","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0817 21:33:44.885970  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:44.885994  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.886005  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.886013  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.896230  223217 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0817 21:33:44.896255  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.896263  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.896269  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.896274  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.896280  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.896290  223217 round_trippers.go:580]     Audit-Id: c887c017-4518-4a38-9a8a-4263ee66a5c9
	I0817 21:33:44.896296  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.897231  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:44.897685  223217 node_ready.go:49] node "multinode-959371" has status "Ready":"True"
	I0817 21:33:44.897708  223217 node_ready.go:38] duration metric: took 6.027056784s waiting for node "multinode-959371" to be "Ready" ...
	I0817 21:33:44.897719  223217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:33:44.897819  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:33:44.897829  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.897844  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.897854  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.912778  223217 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0817 21:33:44.912804  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.912815  223217 round_trippers.go:580]     Audit-Id: 2b7a8bcf-e89c-4ffe-b551-99a96a2a79d2
	I0817 21:33:44.912823  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.912830  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.912837  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.912845  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.912852  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.915699  223217 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"429","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54594 chars]
	I0817 21:33:44.918764  223217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:44.918856  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:33:44.918867  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.918876  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.918882  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.923686  223217 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:33:44.923706  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.923716  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.923724  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.923731  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.923739  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.923747  223217 round_trippers.go:580]     Audit-Id: f648910f-64e8-4d76-9e21-5e9727645097
	I0817 21:33:44.923756  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.924516  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"429","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0817 21:33:44.924969  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:44.924986  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.924996  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.925005  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.929448  223217 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:33:44.929466  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.929473  223217 round_trippers.go:580]     Audit-Id: 080e00ab-035d-4b33-98ef-91d1a206ee43
	I0817 21:33:44.929478  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.929490  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.929498  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.929506  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.929514  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.930288  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:44.930709  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:33:44.930724  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.930734  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.930740  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.933624  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:44.933648  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.933658  223217 round_trippers.go:580]     Audit-Id: 32906f29-52cb-417c-9d66-a0a056b2977c
	I0817 21:33:44.933666  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.933673  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.933681  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.933689  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.933709  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.934335  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"429","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0817 21:33:44.934848  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:44.934863  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:44.934871  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:44.934881  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:44.938129  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:44.938148  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:44.938158  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:44 GMT
	I0817 21:33:44.938166  223217 round_trippers.go:580]     Audit-Id: 1ea05755-a5f8-4fb7-a2b9-3e88dc7b13a1
	I0817 21:33:44.938174  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:44.938183  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:44.938192  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:44.938203  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:44.938987  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:45.440271  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:33:45.440295  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:45.440305  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:45.440312  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:45.443438  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:45.443471  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:45.443482  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:45.443490  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:45.443498  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:45 GMT
	I0817 21:33:45.443506  223217 round_trippers.go:580]     Audit-Id: 904ee209-90c7-44e7-91a1-2ba73082d20d
	I0817 21:33:45.443514  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:45.443522  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:45.443665  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"429","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0817 21:33:45.444304  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:45.444321  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:45.444329  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:45.444335  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:45.446708  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:45.446737  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:45.446746  223217 round_trippers.go:580]     Audit-Id: 48091bcd-d06a-4571-b12d-c8a24c953ca0
	I0817 21:33:45.446755  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:45.446762  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:45.446771  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:45.446789  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:45.446797  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:45 GMT
	I0817 21:33:45.447121  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:45.939842  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:33:45.939877  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:45.939890  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:45.939900  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:45.942911  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:45.942939  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:45.942947  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:45.942953  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:45 GMT
	I0817 21:33:45.942958  223217 round_trippers.go:580]     Audit-Id: 3f556e9c-b459-4764-8ff5-24e8a0711c41
	I0817 21:33:45.942965  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:45.942976  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:45.942985  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:45.943213  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"429","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0817 21:33:45.943712  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:45.943729  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:45.943740  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:45.943748  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:45.946549  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:45.946565  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:45.946572  223217 round_trippers.go:580]     Audit-Id: 381fd613-c2b6-4cdc-b3f7-c1dc72d39720
	I0817 21:33:45.946580  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:45.946589  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:45.946599  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:45.946609  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:45.946620  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:45 GMT
	I0817 21:33:45.946949  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:46.439592  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:33:46.439621  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.439633  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.439642  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.443210  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:46.443246  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.443257  223217 round_trippers.go:580]     Audit-Id: d51e7a2e-60bc-489c-a3c9-c96ff2de9d95
	I0817 21:33:46.443266  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.443276  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.443285  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.443299  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.443304  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.444034  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"429","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0817 21:33:46.444618  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:46.444636  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.444648  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.444658  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.447192  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.447216  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.447226  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.447233  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.447241  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.447255  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.447264  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.447274  223217 round_trippers.go:580]     Audit-Id: f35f3e98-7ba3-4e27-add2-72ed7b297c0b
	I0817 21:33:46.447527  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:46.940249  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:33:46.940279  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.940288  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.940294  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.944220  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:46.944247  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.944255  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.944261  223217 round_trippers.go:580]     Audit-Id: a565ae20-66ee-451d-887f-49317c90c4d6
	I0817 21:33:46.944266  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.944272  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.944278  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.944283  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.945135  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"447","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0817 21:33:46.945604  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:46.945615  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.945622  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.945629  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.947958  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.947975  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.947982  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.947987  223217 round_trippers.go:580]     Audit-Id: cce7361e-20b5-422d-a52f-e6111ecdee70
	I0817 21:33:46.947993  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.947998  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.948003  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.948011  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.948156  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:46.948471  223217 pod_ready.go:92] pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace has status "Ready":"True"
	I0817 21:33:46.948487  223217 pod_ready.go:81] duration metric: took 2.029699468s waiting for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.948495  223217 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.948545  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-959371
	I0817 21:33:46.948554  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.948561  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.948567  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.951186  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.951205  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.951212  223217 round_trippers.go:580]     Audit-Id: b9ffcf0d-20bb-45e9-b679-6f16d7639123
	I0817 21:33:46.951218  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.951224  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.951229  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.951236  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.951244  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.951394  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"441","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0817 21:33:46.951755  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:46.951766  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.951775  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.951781  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.954215  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.954238  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.954247  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.954255  223217 round_trippers.go:580]     Audit-Id: 24a24ed2-c0cb-4339-96d0-d9750d35166c
	I0817 21:33:46.954262  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.954269  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.954277  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.954285  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.954481  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:46.954770  223217 pod_ready.go:92] pod "etcd-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:33:46.954784  223217 pod_ready.go:81] duration metric: took 6.28377ms waiting for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.954796  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.954847  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-959371
	I0817 21:33:46.954855  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.954863  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.954869  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.957350  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.957371  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.957380  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.957389  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.957397  223217 round_trippers.go:580]     Audit-Id: c3437524-c01a-4503-8898-0489b10800f9
	I0817 21:33:46.957404  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.957411  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.957419  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.957537  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-959371","namespace":"kube-system","uid":"0efb1ae7-705a-47df-91c6-0d9390b68983","resourceVersion":"442","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.104:8443","kubernetes.io/config.hash":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.mirror":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.seen":"2023-08-17T21:33:26.519082064Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0817 21:33:46.957972  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:46.957985  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.957992  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.958000  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.960209  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.960226  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.960233  223217 round_trippers.go:580]     Audit-Id: 42cf8489-656d-4aa0-a91d-6380bf1e78bb
	I0817 21:33:46.960242  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.960250  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.960260  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.960273  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.960286  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.960397  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:46.960675  223217 pod_ready.go:92] pod "kube-apiserver-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:33:46.960689  223217 pod_ready.go:81] duration metric: took 5.886876ms waiting for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.960700  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.960742  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:33:46.960749  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.960756  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.960761  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.963140  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.963154  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.963160  223217 round_trippers.go:580]     Audit-Id: f1c7530c-e8a6-479b-9c91-4e92452ce6c2
	I0817 21:33:46.963166  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.963171  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.963181  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.963186  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.963192  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.963408  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"443","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0817 21:33:46.963846  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:46.963861  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:46.963868  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:46.963876  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:46.965949  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:46.965963  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:46.965969  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:46.965975  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:46.965981  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:46.965989  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:46.965998  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:46 GMT
	I0817 21:33:46.966006  223217 round_trippers.go:580]     Audit-Id: d53b5b59-f052-4b90-8bc2-a5f740e23f70
	I0817 21:33:46.966805  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:46.967083  223217 pod_ready.go:92] pod "kube-controller-manager-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:33:46.967096  223217 pod_ready.go:81] duration metric: took 6.390773ms waiting for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:46.967106  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:47.086509  223217 request.go:628] Waited for 119.338283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:33:47.086606  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:33:47.086613  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:47.086625  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:47.086636  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:47.090135  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:47.090159  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:47.090169  223217 round_trippers.go:580]     Audit-Id: c429e442-093e-4532-8b71-d61be3060f77
	I0817 21:33:47.090177  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:47.090185  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:47.090194  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:47.090203  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:47.090210  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:47 GMT
	I0817 21:33:47.090339  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gdf7","generateName":"kube-proxy-","namespace":"kube-system","uid":"00e6f433-51d6-49bb-a927-780720361eb3","resourceVersion":"413","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0817 21:33:47.286146  223217 request.go:628] Waited for 195.363382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:47.286216  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:47.286221  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:47.286229  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:47.286235  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:47.289088  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:47.289119  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:47.289129  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:47.289138  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:47 GMT
	I0817 21:33:47.289146  223217 round_trippers.go:580]     Audit-Id: de74a90a-c168-4720-8853-ffeae9596cfc
	I0817 21:33:47.289154  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:47.289162  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:47.289170  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:47.289344  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:47.289675  223217 pod_ready.go:92] pod "kube-proxy-8gdf7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:33:47.289688  223217 pod_ready.go:81] duration metric: took 322.577185ms waiting for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:47.289697  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:47.486141  223217 request.go:628] Waited for 196.347135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:33:47.486209  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:33:47.486214  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:47.486223  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:47.486230  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:47.489152  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:47.489174  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:47.489180  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:47 GMT
	I0817 21:33:47.489186  223217 round_trippers.go:580]     Audit-Id: acb7b0fb-aab9-4d1c-b0c3-ff898475dbba
	I0817 21:33:47.489192  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:47.489197  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:47.489202  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:47.489208  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:47.489373  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-959371","namespace":"kube-system","uid":"a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2","resourceVersion":"349","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.mirror":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.seen":"2023-08-17T21:33:26.519087461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0817 21:33:47.686106  223217 request.go:628] Waited for 196.298668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:47.686188  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:33:47.686207  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:47.686223  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:47.686235  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:47.689106  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:33:47.689133  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:47.689143  223217 round_trippers.go:580]     Audit-Id: a8a6fbd1-6228-48af-ba59-9c8d1d8d72e4
	I0817 21:33:47.689152  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:47.689161  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:47.689169  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:47.689175  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:47.689180  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:47 GMT
	I0817 21:33:47.689284  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:33:47.689599  223217 pod_ready.go:92] pod "kube-scheduler-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:33:47.689612  223217 pod_ready.go:81] duration metric: took 399.908985ms waiting for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:33:47.689623  223217 pod_ready.go:38] duration metric: took 2.791887424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
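	[Editorial aside, not part of the captured test output: the pod_ready.go lines above poll each system pod until its Ready condition is True. A minimal sketch of that pattern with client-go is below; the helper name waitPodReady is hypothetical, and PollUntilContextTimeout assumes a recent apimachinery (it exists in the v0.27 line used here).]

```go
// Sketch: wait until a named pod reports the Ready condition, polling the API.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet" and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system",
		"kube-scheduler-multinode-959371", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```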
	I0817 21:33:47.689639  223217 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:33:47.689692  223217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:33:47.703138  223217 command_runner.go:130] > 1068
	I0817 21:33:47.703182  223217 api_server.go:72] duration metric: took 8.978878225s to wait for apiserver process to appear ...
	I0817 21:33:47.703194  223217 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:33:47.703216  223217 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:33:47.708379  223217 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0817 21:33:47.708451  223217 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I0817 21:33:47.708459  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:47.708470  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:47.708482  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:47.709525  223217 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:33:47.709542  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:47.709549  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:47.709554  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:47.709560  223217 round_trippers.go:580]     Content-Length: 263
	I0817 21:33:47.709565  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:47 GMT
	I0817 21:33:47.709570  223217 round_trippers.go:580]     Audit-Id: 4312ce82-9280-4a14-b3fb-01b79fc3ccc9
	I0817 21:33:47.709577  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:47.709582  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:47.709601  223217 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0817 21:33:47.709688  223217 api_server.go:141] control plane version: v1.27.4
	I0817 21:33:47.709705  223217 api_server.go:131] duration metric: took 6.50427ms to wait for apiserver health ...
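	[Editorial aside, not test output: the healthz check logged above is a plain GET against https://192.168.39.104:8443/healthz that expects a 200 with body "ok". A stripped-down sketch is below; the real check authenticates with the cluster's client certificates, whereas this version simply skips TLS verification for illustration.]

```go
// Sketch: probe the apiserver /healthz endpoint and print its response.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.104:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```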
	I0817 21:33:47.709713  223217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:33:47.886123  223217 request.go:628] Waited for 176.290219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:33:47.886194  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:33:47.886199  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:47.886206  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:47.886213  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:47.889785  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:47.889810  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:47.889818  223217 round_trippers.go:580]     Audit-Id: 52d0e25d-5f0a-4e97-8171-c3819f529ec9
	I0817 21:33:47.889824  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:47.889830  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:47.889835  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:47.889841  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:47.889846  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:47 GMT
	I0817 21:33:47.891263  223217 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"447","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0817 21:33:47.893861  223217 system_pods.go:59] 8 kube-system pods found
	I0817 21:33:47.893893  223217 system_pods.go:61] "coredns-5d78c9869d-87rlb" [52da85e0-72f0-4919-8615-d1cb46b65ca4] Running
	I0817 21:33:47.893900  223217 system_pods.go:61] "etcd-multinode-959371" [0ffe6db5-4285-4788-88b2-073753ece5f3] Running
	I0817 21:33:47.893907  223217 system_pods.go:61] "kindnet-s7l7j" [6af177c8-cc30-4a86-98d8-443cef5036d8] Running
	I0817 21:33:47.893913  223217 system_pods.go:61] "kube-apiserver-multinode-959371" [0efb1ae7-705a-47df-91c6-0d9390b68983] Running
	I0817 21:33:47.893920  223217 system_pods.go:61] "kube-controller-manager-multinode-959371" [00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f] Running
	I0817 21:33:47.893925  223217 system_pods.go:61] "kube-proxy-8gdf7" [00e6f433-51d6-49bb-a927-780720361eb3] Running
	I0817 21:33:47.893931  223217 system_pods.go:61] "kube-scheduler-multinode-959371" [a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2] Running
	I0817 21:33:47.893938  223217 system_pods.go:61] "storage-provisioner" [e8aa1192-3588-49da-be88-15a801d006fc] Running
	I0817 21:33:47.893948  223217 system_pods.go:74] duration metric: took 184.228433ms to wait for pod list to return data ...
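	[Editorial aside, not test output: the "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's token-bucket rate limiter, which is sized from the QPS and Burst fields of the rest.Config. A minimal sketch of setting those knobs is below; the numbers are arbitrary examples, not the values this test used.]

```go
// Sketch: configure client-side throttling (QPS/Burst) before building a clientset.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 5    // steady-state requests per second before the client starts queueing
	cfg.Burst = 10 // short bursts allowed above QPS

	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```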
	I0817 21:33:47.893968  223217 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:33:48.086530  223217 request.go:628] Waited for 192.458718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:33:48.086593  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:33:48.086598  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:48.086605  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:48.086612  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:48.089998  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:48.090026  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:48.090036  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:48.090044  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:48.090068  223217 round_trippers.go:580]     Content-Length: 261
	I0817 21:33:48.090075  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:48 GMT
	I0817 21:33:48.090083  223217 round_trippers.go:580]     Audit-Id: 0d748422-d289-4fde-828b-3c7c1ad1dbcc
	I0817 21:33:48.090092  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:48.090101  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:48.090131  223217 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c7ddc132-1c40-459d-89b5-903ee7cd5edc","resourceVersion":"343","creationTimestamp":"2023-08-17T21:33:38Z"}}]}
	I0817 21:33:48.090353  223217 default_sa.go:45] found service account: "default"
	I0817 21:33:48.090371  223217 default_sa.go:55] duration metric: took 196.39635ms for default service account to be created ...
	I0817 21:33:48.090381  223217 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:33:48.286843  223217 request.go:628] Waited for 196.38189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:33:48.286923  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:33:48.286932  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:48.286940  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:48.286946  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:48.291042  223217 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:33:48.291073  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:48.291084  223217 round_trippers.go:580]     Audit-Id: 08d6023e-78d2-450d-92bc-ffb756b354ba
	I0817 21:33:48.291093  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:48.291100  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:48.291106  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:48.291114  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:48.291122  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:48 GMT
	I0817 21:33:48.292550  223217 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"447","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0817 21:33:48.294283  223217 system_pods.go:86] 8 kube-system pods found
	I0817 21:33:48.294359  223217 system_pods.go:89] "coredns-5d78c9869d-87rlb" [52da85e0-72f0-4919-8615-d1cb46b65ca4] Running
	I0817 21:33:48.294379  223217 system_pods.go:89] "etcd-multinode-959371" [0ffe6db5-4285-4788-88b2-073753ece5f3] Running
	I0817 21:33:48.294388  223217 system_pods.go:89] "kindnet-s7l7j" [6af177c8-cc30-4a86-98d8-443cef5036d8] Running
	I0817 21:33:48.294397  223217 system_pods.go:89] "kube-apiserver-multinode-959371" [0efb1ae7-705a-47df-91c6-0d9390b68983] Running
	I0817 21:33:48.294407  223217 system_pods.go:89] "kube-controller-manager-multinode-959371" [00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f] Running
	I0817 21:33:48.294418  223217 system_pods.go:89] "kube-proxy-8gdf7" [00e6f433-51d6-49bb-a927-780720361eb3] Running
	I0817 21:33:48.294426  223217 system_pods.go:89] "kube-scheduler-multinode-959371" [a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2] Running
	I0817 21:33:48.294434  223217 system_pods.go:89] "storage-provisioner" [e8aa1192-3588-49da-be88-15a801d006fc] Running
	I0817 21:33:48.294443  223217 system_pods.go:126] duration metric: took 204.057055ms to wait for k8s-apps to be running ...
	I0817 21:33:48.294456  223217 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:33:48.294509  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:33:48.309790  223217 system_svc.go:56] duration metric: took 15.319533ms WaitForService to wait for kubelet.
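	[Editorial aside, not test output: the kubelet service check above shells out to systemctl over SSH and relies on the exit code. A minimal local sketch of the same idea is below; `is-active --quiet` exits 0 only when the unit is active.]

```go
// Sketch: check that the kubelet systemd unit is active via its exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```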
	I0817 21:33:48.309825  223217 kubeadm.go:581] duration metric: took 9.585522301s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:33:48.309847  223217 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:33:48.486322  223217 request.go:628] Waited for 176.378862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I0817 21:33:48.486402  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I0817 21:33:48.486407  223217 round_trippers.go:469] Request Headers:
	I0817 21:33:48.486415  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:33:48.486421  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:33:48.489626  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:33:48.489656  223217 round_trippers.go:577] Response Headers:
	I0817 21:33:48.489666  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:33:48.489674  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:33:48 GMT
	I0817 21:33:48.489682  223217 round_trippers.go:580]     Audit-Id: 0bbcea68-f0fc-4330-a02b-6f1f81a93835
	I0817 21:33:48.489689  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:33:48.489697  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:33:48.489705  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:33:48.489820  223217 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0817 21:33:48.490242  223217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:33:48.490267  223217 node_conditions.go:123] node cpu capacity is 2
	I0817 21:33:48.490280  223217 node_conditions.go:105] duration metric: took 180.428171ms to run NodePressure ...
	I0817 21:33:48.490291  223217 start.go:228] waiting for startup goroutines ...
	I0817 21:33:48.490300  223217 start.go:233] waiting for cluster config update ...
	I0817 21:33:48.490310  223217 start.go:242] writing updated cluster config ...
	I0817 21:33:48.492793  223217 out.go:177] 
	I0817 21:33:48.494442  223217 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:33:48.494525  223217 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:33:48.496479  223217 out.go:177] * Starting worker node multinode-959371-m02 in cluster multinode-959371
	I0817 21:33:48.497927  223217 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:33:48.497958  223217 cache.go:57] Caching tarball of preloaded images
	I0817 21:33:48.498105  223217 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:33:48.498119  223217 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:33:48.498232  223217 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:33:48.498416  223217 start.go:365] acquiring machines lock for multinode-959371-m02: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:33:48.498471  223217 start.go:369] acquired machines lock for "multinode-959371-m02" in 32.672µs
	I0817 21:33:48.498496  223217 start.go:93] Provisioning new machine with config: &{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterNam
e:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:33:48.498587  223217 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0817 21:33:48.500360  223217 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0817 21:33:48.500457  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:33:48.500503  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:33:48.515260  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I0817 21:33:48.515796  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:33:48.516346  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:33:48.516368  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:33:48.516694  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:33:48.516953  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:33:48.517113  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:33:48.517284  223217 start.go:159] libmachine.API.Create for "multinode-959371" (driver="kvm2")
	I0817 21:33:48.517322  223217 client.go:168] LocalClient.Create starting
	I0817 21:33:48.517362  223217 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem
	I0817 21:33:48.517406  223217 main.go:141] libmachine: Decoding PEM data...
	I0817 21:33:48.517432  223217 main.go:141] libmachine: Parsing certificate...
	I0817 21:33:48.517510  223217 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem
	I0817 21:33:48.517539  223217 main.go:141] libmachine: Decoding PEM data...
	I0817 21:33:48.517558  223217 main.go:141] libmachine: Parsing certificate...
	I0817 21:33:48.517582  223217 main.go:141] libmachine: Running pre-create checks...
	I0817 21:33:48.517595  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .PreCreateCheck
	I0817 21:33:48.517759  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetConfigRaw
	I0817 21:33:48.518175  223217 main.go:141] libmachine: Creating machine...
	I0817 21:33:48.518194  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .Create
	I0817 21:33:48.518326  223217 main.go:141] libmachine: (multinode-959371-m02) Creating KVM machine...
	I0817 21:33:48.519477  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found existing default KVM network
	I0817 21:33:48.519643  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found existing private KVM network mk-multinode-959371
	I0817 21:33:48.519750  223217 main.go:141] libmachine: (multinode-959371-m02) Setting up store path in /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02 ...
	I0817 21:33:48.519777  223217 main.go:141] libmachine: (multinode-959371-m02) Building disk image from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0817 21:33:48.519840  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:48.519734  223591 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:33:48.519954  223217 main.go:141] libmachine: (multinode-959371-m02) Downloading /home/jenkins/minikube-integration/16865-203458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0817 21:33:48.747858  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:48.747722  223591 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa...
	I0817 21:33:48.976602  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:48.976436  223591 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/multinode-959371-m02.rawdisk...
	I0817 21:33:48.976631  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Writing magic tar header
	I0817 21:33:48.976644  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Writing SSH key tar header
	I0817 21:33:48.976653  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:48.976555  223591 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02 ...
	I0817 21:33:48.976709  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02
	I0817 21:33:48.976781  223217 main.go:141] libmachine: (multinode-959371-m02) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02 (perms=drwx------)
	I0817 21:33:48.976807  223217 main.go:141] libmachine: (multinode-959371-m02) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines (perms=drwxr-xr-x)
	I0817 21:33:48.976816  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines
	I0817 21:33:48.976828  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:33:48.976836  223217 main.go:141] libmachine: (multinode-959371-m02) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube (perms=drwxr-xr-x)
	I0817 21:33:48.976844  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458
	I0817 21:33:48.976853  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0817 21:33:48.976862  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home/jenkins
	I0817 21:33:48.976872  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Checking permissions on dir: /home
	I0817 21:33:48.976880  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Skipping /home - not owner
	I0817 21:33:48.976917  223217 main.go:141] libmachine: (multinode-959371-m02) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458 (perms=drwxrwxr-x)
	I0817 21:33:48.976947  223217 main.go:141] libmachine: (multinode-959371-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0817 21:33:48.976963  223217 main.go:141] libmachine: (multinode-959371-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0817 21:33:48.976976  223217 main.go:141] libmachine: (multinode-959371-m02) Creating domain...
	I0817 21:33:48.978037  223217 main.go:141] libmachine: (multinode-959371-m02) define libvirt domain using xml: 
	I0817 21:33:48.978080  223217 main.go:141] libmachine: (multinode-959371-m02) <domain type='kvm'>
	I0817 21:33:48.978093  223217 main.go:141] libmachine: (multinode-959371-m02)   <name>multinode-959371-m02</name>
	I0817 21:33:48.978106  223217 main.go:141] libmachine: (multinode-959371-m02)   <memory unit='MiB'>2200</memory>
	I0817 21:33:48.978116  223217 main.go:141] libmachine: (multinode-959371-m02)   <vcpu>2</vcpu>
	I0817 21:33:48.978124  223217 main.go:141] libmachine: (multinode-959371-m02)   <features>
	I0817 21:33:48.978157  223217 main.go:141] libmachine: (multinode-959371-m02)     <acpi/>
	I0817 21:33:48.978194  223217 main.go:141] libmachine: (multinode-959371-m02)     <apic/>
	I0817 21:33:48.978204  223217 main.go:141] libmachine: (multinode-959371-m02)     <pae/>
	I0817 21:33:48.978216  223217 main.go:141] libmachine: (multinode-959371-m02)     
	I0817 21:33:48.978228  223217 main.go:141] libmachine: (multinode-959371-m02)   </features>
	I0817 21:33:48.978240  223217 main.go:141] libmachine: (multinode-959371-m02)   <cpu mode='host-passthrough'>
	I0817 21:33:48.978253  223217 main.go:141] libmachine: (multinode-959371-m02)   
	I0817 21:33:48.978265  223217 main.go:141] libmachine: (multinode-959371-m02)   </cpu>
	I0817 21:33:48.978275  223217 main.go:141] libmachine: (multinode-959371-m02)   <os>
	I0817 21:33:48.978286  223217 main.go:141] libmachine: (multinode-959371-m02)     <type>hvm</type>
	I0817 21:33:48.978297  223217 main.go:141] libmachine: (multinode-959371-m02)     <boot dev='cdrom'/>
	I0817 21:33:48.978305  223217 main.go:141] libmachine: (multinode-959371-m02)     <boot dev='hd'/>
	I0817 21:33:48.978318  223217 main.go:141] libmachine: (multinode-959371-m02)     <bootmenu enable='no'/>
	I0817 21:33:48.978329  223217 main.go:141] libmachine: (multinode-959371-m02)   </os>
	I0817 21:33:48.978359  223217 main.go:141] libmachine: (multinode-959371-m02)   <devices>
	I0817 21:33:48.978384  223217 main.go:141] libmachine: (multinode-959371-m02)     <disk type='file' device='cdrom'>
	I0817 21:33:48.978406  223217 main.go:141] libmachine: (multinode-959371-m02)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/boot2docker.iso'/>
	I0817 21:33:48.978421  223217 main.go:141] libmachine: (multinode-959371-m02)       <target dev='hdc' bus='scsi'/>
	I0817 21:33:48.978436  223217 main.go:141] libmachine: (multinode-959371-m02)       <readonly/>
	I0817 21:33:48.978448  223217 main.go:141] libmachine: (multinode-959371-m02)     </disk>
	I0817 21:33:48.978467  223217 main.go:141] libmachine: (multinode-959371-m02)     <disk type='file' device='disk'>
	I0817 21:33:48.978483  223217 main.go:141] libmachine: (multinode-959371-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0817 21:33:48.978505  223217 main.go:141] libmachine: (multinode-959371-m02)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/multinode-959371-m02.rawdisk'/>
	I0817 21:33:48.978516  223217 main.go:141] libmachine: (multinode-959371-m02)       <target dev='hda' bus='virtio'/>
	I0817 21:33:48.978523  223217 main.go:141] libmachine: (multinode-959371-m02)     </disk>
	I0817 21:33:48.978531  223217 main.go:141] libmachine: (multinode-959371-m02)     <interface type='network'>
	I0817 21:33:48.978540  223217 main.go:141] libmachine: (multinode-959371-m02)       <source network='mk-multinode-959371'/>
	I0817 21:33:48.978548  223217 main.go:141] libmachine: (multinode-959371-m02)       <model type='virtio'/>
	I0817 21:33:48.978565  223217 main.go:141] libmachine: (multinode-959371-m02)     </interface>
	I0817 21:33:48.978578  223217 main.go:141] libmachine: (multinode-959371-m02)     <interface type='network'>
	I0817 21:33:48.978596  223217 main.go:141] libmachine: (multinode-959371-m02)       <source network='default'/>
	I0817 21:33:48.978615  223217 main.go:141] libmachine: (multinode-959371-m02)       <model type='virtio'/>
	I0817 21:33:48.978630  223217 main.go:141] libmachine: (multinode-959371-m02)     </interface>
	I0817 21:33:48.978643  223217 main.go:141] libmachine: (multinode-959371-m02)     <serial type='pty'>
	I0817 21:33:48.978658  223217 main.go:141] libmachine: (multinode-959371-m02)       <target port='0'/>
	I0817 21:33:48.978671  223217 main.go:141] libmachine: (multinode-959371-m02)     </serial>
	I0817 21:33:48.978699  223217 main.go:141] libmachine: (multinode-959371-m02)     <console type='pty'>
	I0817 21:33:48.978711  223217 main.go:141] libmachine: (multinode-959371-m02)       <target type='serial' port='0'/>
	I0817 21:33:48.978726  223217 main.go:141] libmachine: (multinode-959371-m02)     </console>
	I0817 21:33:48.978734  223217 main.go:141] libmachine: (multinode-959371-m02)     <rng model='virtio'>
	I0817 21:33:48.978740  223217 main.go:141] libmachine: (multinode-959371-m02)       <backend model='random'>/dev/random</backend>
	I0817 21:33:48.978751  223217 main.go:141] libmachine: (multinode-959371-m02)     </rng>
	I0817 21:33:48.978764  223217 main.go:141] libmachine: (multinode-959371-m02)     
	I0817 21:33:48.978774  223217 main.go:141] libmachine: (multinode-959371-m02)     
	I0817 21:33:48.978780  223217 main.go:141] libmachine: (multinode-959371-m02)   </devices>
	I0817 21:33:48.978787  223217 main.go:141] libmachine: (multinode-959371-m02) </domain>
	I0817 21:33:48.978798  223217 main.go:141] libmachine: (multinode-959371-m02) 
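	[Editorial aside, not test output: the libvirt domain XML printed above is what the kvm2 driver defines for the new worker VM. A minimal sketch of defining and starting such a domain with the libvirt Go bindings is below, assuming the libvirt.org/go/libvirt package; the XML literal is abbreviated, the real definition is the full <domain> document shown in the log.]

```go
// Sketch: define a persistent libvirt domain from XML and start it.
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>
  <name>multinode-959371-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <!-- ...os, disks, interfaces and devices as in the log above... -->
</domain>`

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
	fmt.Println("domain started")
}
```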
	I0817 21:33:48.986386  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:df:b6:0d in network default
	I0817 21:33:48.987034  223217 main.go:141] libmachine: (multinode-959371-m02) Ensuring networks are active...
	I0817 21:33:48.987058  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:48.987860  223217 main.go:141] libmachine: (multinode-959371-m02) Ensuring network default is active
	I0817 21:33:48.988237  223217 main.go:141] libmachine: (multinode-959371-m02) Ensuring network mk-multinode-959371 is active
	I0817 21:33:48.988558  223217 main.go:141] libmachine: (multinode-959371-m02) Getting domain xml...
	I0817 21:33:48.989337  223217 main.go:141] libmachine: (multinode-959371-m02) Creating domain...
	I0817 21:33:50.259813  223217 main.go:141] libmachine: (multinode-959371-m02) Waiting to get IP...
	I0817 21:33:50.260610  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:50.261055  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:50.261088  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:50.261037  223591 retry.go:31] will retry after 252.24038ms: waiting for machine to come up
	I0817 21:33:50.514557  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:50.515041  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:50.515076  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:50.514966  223591 retry.go:31] will retry after 350.93897ms: waiting for machine to come up
	I0817 21:33:50.867532  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:50.867996  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:50.868054  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:50.867945  223591 retry.go:31] will retry after 362.021215ms: waiting for machine to come up
	I0817 21:33:51.231549  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:51.232036  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:51.232068  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:51.231971  223591 retry.go:31] will retry after 515.701987ms: waiting for machine to come up
	I0817 21:33:51.750126  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:51.750662  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:51.750694  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:51.750607  223591 retry.go:31] will retry after 626.072448ms: waiting for machine to come up
	I0817 21:33:52.378596  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:52.379023  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:52.379098  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:52.378985  223591 retry.go:31] will retry after 935.675273ms: waiting for machine to come up
	I0817 21:33:53.316085  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:53.316634  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:53.316672  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:53.316554  223591 retry.go:31] will retry after 1.018469993s: waiting for machine to come up
	I0817 21:33:54.336507  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:54.336915  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:54.336944  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:54.336865  223591 retry.go:31] will retry after 1.112458226s: waiting for machine to come up
	I0817 21:33:55.451186  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:55.451621  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:55.451656  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:55.451558  223591 retry.go:31] will retry after 1.379345491s: waiting for machine to come up
	I0817 21:33:56.833211  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:56.833631  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:56.833663  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:56.833581  223591 retry.go:31] will retry after 1.437493815s: waiting for machine to come up
	I0817 21:33:58.272846  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:33:58.273349  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:33:58.273377  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:33:58.273293  223591 retry.go:31] will retry after 2.869038106s: waiting for machine to come up
	I0817 21:34:01.144407  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:01.145024  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:34:01.145056  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:34:01.144923  223591 retry.go:31] will retry after 2.643319246s: waiting for machine to come up
	I0817 21:34:03.789323  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:03.789773  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:34:03.789792  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:34:03.789734  223591 retry.go:31] will retry after 3.906067253s: waiting for machine to come up
	I0817 21:34:07.700813  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:07.701274  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find current IP address of domain multinode-959371-m02 in network mk-multinode-959371
	I0817 21:34:07.701316  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | I0817 21:34:07.701226  223591 retry.go:31] will retry after 5.536663166s: waiting for machine to come up
	I0817 21:34:13.243107  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.243593  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has current primary IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.243617  223217 main.go:141] libmachine: (multinode-959371-m02) Found IP for machine: 192.168.39.175
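	[Editorial aside, not test output: the "will retry after ..." lines above poll libvirt for the new machine's DHCP lease with growing, jittered delays until an IP appears. A rough sketch of that shape is below; lookupIP is a hypothetical stand-in for the lease query, and the delay schedule only approximates the one in the log.]

```go
// Sketch: retry with growing, jittered delays until an IP lookup succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: in minikube this asks libvirt for the domain's DHCP lease.
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Add jitter and grow the delay, capping it so a slow boot is still polled regularly.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Println("will retry after", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	fmt.Println("timed out waiting for machine to come up")
}
```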
	I0817 21:34:13.243632  223217 main.go:141] libmachine: (multinode-959371-m02) Reserving static IP address...
	I0817 21:34:13.244099  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | unable to find host DHCP lease matching {name: "multinode-959371-m02", mac: "52:54:00:c1:00:c7", ip: "192.168.39.175"} in network mk-multinode-959371
	I0817 21:34:13.321861  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Getting to WaitForSSH function...
	I0817 21:34:13.321898  223217 main.go:141] libmachine: (multinode-959371-m02) Reserved static IP address: 192.168.39.175
	I0817 21:34:13.321914  223217 main.go:141] libmachine: (multinode-959371-m02) Waiting for SSH to be available...
	I0817 21:34:13.324874  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.325334  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.325378  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.325469  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Using SSH client type: external
	I0817 21:34:13.325500  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa (-rw-------)
	I0817 21:34:13.325529  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 21:34:13.325553  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | About to run SSH command:
	I0817 21:34:13.325596  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | exit 0
	I0817 21:34:13.417981  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | SSH cmd err, output: <nil>: 
	I0817 21:34:13.418315  223217 main.go:141] libmachine: (multinode-959371-m02) KVM machine creation complete!
	I0817 21:34:13.418714  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetConfigRaw
	I0817 21:34:13.419304  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:13.419496  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:13.419663  223217 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0817 21:34:13.419675  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetState
	I0817 21:34:13.420917  223217 main.go:141] libmachine: Detecting operating system of created instance...
	I0817 21:34:13.420936  223217 main.go:141] libmachine: Waiting for SSH to be available...
	I0817 21:34:13.420945  223217 main.go:141] libmachine: Getting to WaitForSSH function...
	I0817 21:34:13.420955  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:13.423621  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.424023  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.424052  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.424175  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:13.424398  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.424564  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.424746  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:13.424895  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:34:13.425347  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:34:13.425363  223217 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0817 21:34:13.550198  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:34:13.550228  223217 main.go:141] libmachine: Detecting the provisioner...
	I0817 21:34:13.550237  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:13.553066  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.553485  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.553521  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.553664  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:13.553895  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.554077  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.554270  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:13.554432  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:34:13.554869  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:34:13.554883  223217 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0817 21:34:13.679245  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0817 21:34:13.679322  223217 main.go:141] libmachine: found compatible host: buildroot
	I0817 21:34:13.679335  223217 main.go:141] libmachine: Provisioning with buildroot...
	I0817 21:34:13.679344  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:34:13.679655  223217 buildroot.go:166] provisioning hostname "multinode-959371-m02"
	I0817 21:34:13.679688  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:34:13.679880  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:13.682659  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.683163  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.683195  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.683366  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:13.683575  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.683775  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.683973  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:13.684189  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:34:13.684595  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:34:13.684612  223217 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959371-m02 && echo "multinode-959371-m02" | sudo tee /etc/hostname
	I0817 21:34:13.824050  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959371-m02
	
	I0817 21:34:13.824091  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:13.826801  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.827248  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.827283  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.827416  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:13.827614  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.827855  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:13.828048  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:13.828305  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:34:13.828696  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:34:13.828716  223217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959371-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959371-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959371-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:34:13.964805  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:34:13.964854  223217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:34:13.964876  223217 buildroot.go:174] setting up certificates
	I0817 21:34:13.964889  223217 provision.go:83] configureAuth start
	I0817 21:34:13.964904  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:34:13.965233  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:34:13.967641  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.968114  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.968157  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.968326  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:13.970368  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.970747  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:13.970783  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:13.970907  223217 provision.go:138] copyHostCerts
	I0817 21:34:13.970945  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:34:13.970982  223217 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 21:34:13.970993  223217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:34:13.971080  223217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:34:13.971174  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:34:13.971199  223217 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 21:34:13.971209  223217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:34:13.971245  223217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:34:13.971329  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:34:13.971360  223217 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 21:34:13.971369  223217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:34:13.971408  223217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:34:13.971476  223217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.multinode-959371-m02 san=[192.168.39.175 192.168.39.175 localhost 127.0.0.1 minikube multinode-959371-m02]
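	Note on the step above: minikube generates a per-node server certificate signed by the shared minikube CA, with SANs covering the node IP, localhost and the machine names. A rough openssl sketch of an equivalent certificate (illustration only; minikube does this in Go, and the local file names here are assumptions):
	  # issue a server cert signed by the minikube CA with the SANs seen in the log
	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.multinode-959371-m02"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf "subjectAltName=IP:192.168.39.175,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-959371-m02")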
	I0817 21:34:14.083190  223217 provision.go:172] copyRemoteCerts
	I0817 21:34:14.083261  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:34:14.083296  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:14.085829  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.086181  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.086214  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.086402  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:14.086613  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.086783  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:14.086918  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:34:14.179703  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:34:14.179774  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:34:14.206607  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:34:14.206683  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0817 21:34:14.233785  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:34:14.233875  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:34:14.258519  223217 provision.go:86] duration metric: configureAuth took 293.614299ms
	I0817 21:34:14.258550  223217 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:34:14.258763  223217 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:34:14.258847  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:14.261956  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.262362  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.262402  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.262585  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:14.262800  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.262978  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.263089  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:14.263280  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:34:14.263701  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:34:14.263717  223217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:34:14.594684  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:34:14.594722  223217 main.go:141] libmachine: Checking connection to Docker...
	I0817 21:34:14.594743  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetURL
	I0817 21:34:14.596062  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | Using libvirt version 6000000
	I0817 21:34:14.598565  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.598935  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.598984  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.599136  223217 main.go:141] libmachine: Docker is up and running!
	I0817 21:34:14.599165  223217 main.go:141] libmachine: Reticulating splines...
	I0817 21:34:14.599180  223217 client.go:171] LocalClient.Create took 26.08183934s
	I0817 21:34:14.599201  223217 start.go:167] duration metric: libmachine.API.Create for "multinode-959371" took 26.081920457s
	I0817 21:34:14.599212  223217 start.go:300] post-start starting for "multinode-959371-m02" (driver="kvm2")
	I0817 21:34:14.599221  223217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:34:14.599240  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:14.599487  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:34:14.599520  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:14.601843  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.602234  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.602268  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.602406  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:14.602593  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.602770  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:14.602880  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:34:14.695648  223217 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:34:14.699955  223217 command_runner.go:130] > NAME=Buildroot
	I0817 21:34:14.699977  223217 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0817 21:34:14.699981  223217 command_runner.go:130] > ID=buildroot
	I0817 21:34:14.699986  223217 command_runner.go:130] > VERSION_ID=2021.02.12
	I0817 21:34:14.699990  223217 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0817 21:34:14.700052  223217 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:34:14.700078  223217 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:34:14.700155  223217 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:34:14.700228  223217 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 21:34:14.700238  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /etc/ssl/certs/2106702.pem
	I0817 21:34:14.700318  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:34:14.708882  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:34:14.732614  223217 start.go:303] post-start completed in 133.385424ms
	I0817 21:34:14.732705  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetConfigRaw
	I0817 21:34:14.733283  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:34:14.735845  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.736161  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.736199  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.736451  223217 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:34:14.736630  223217 start.go:128] duration metric: createHost completed in 26.238033189s
	I0817 21:34:14.736655  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:14.738736  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.739078  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.739107  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.739226  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:14.739405  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.739591  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.739747  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:14.739926  223217 main.go:141] libmachine: Using SSH client type: native
	I0817 21:34:14.740412  223217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:34:14.740428  223217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 21:34:14.867370  223217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692308054.851002184
	
	I0817 21:34:14.867399  223217 fix.go:206] guest clock: 1692308054.851002184
	I0817 21:34:14.867408  223217 fix.go:219] Guest: 2023-08-17 21:34:14.851002184 +0000 UTC Remote: 2023-08-17 21:34:14.73664281 +0000 UTC m=+94.528455385 (delta=114.359374ms)
	I0817 21:34:14.867425  223217 fix.go:190] guest clock delta is within tolerance: 114.359374ms
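	Note on the guest-clock check above: it compares the VM's date +%s.%N output against the host clock and proceeds only when the skew is within tolerance (about 114ms in this run). A rough shell sketch of the same comparison, assuming key-based SSH access to the node:
	  guest=$(ssh docker@192.168.39.175 'date +%s.%N')   # guest wall clock
	  host=$(date +%s.%N)                                # host wall clock at roughly the same moment
	  echo "delta: $(echo "$host - $guest" | bc) s"      # minikube only intervenes if this is large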
	I0817 21:34:14.867430  223217 start.go:83] releasing machines lock for "multinode-959371-m02", held for 26.368947392s
	I0817 21:34:14.867456  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:14.867733  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:34:14.870624  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.871011  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.871049  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.873712  223217 out.go:177] * Found network options:
	I0817 21:34:14.875508  223217 out.go:177]   - NO_PROXY=192.168.39.104
	W0817 21:34:14.877004  223217 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:34:14.877065  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:14.877718  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:14.877919  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:34:14.878024  223217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:34:14.878085  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	W0817 21:34:14.878150  223217 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:34:14.878236  223217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:34:14.878267  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:34:14.880924  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.881056  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.881329  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.881364  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.881570  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:14.881667  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:14.881696  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:14.881739  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.881849  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:34:14.881919  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:14.881983  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:34:14.882063  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:34:14.882119  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:34:14.882254  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:34:15.008536  223217 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:34:15.146493  223217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:34:15.152309  223217 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0817 21:34:15.152364  223217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:34:15.152422  223217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:34:15.168845  223217 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0817 21:34:15.168882  223217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 21:34:15.168889  223217 start.go:466] detecting cgroup driver to use...
	I0817 21:34:15.168954  223217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:34:15.184378  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:34:15.198086  223217 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:34:15.198175  223217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:34:15.212075  223217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:34:15.226642  223217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:34:15.242555  223217 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0817 21:34:15.345977  223217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:34:15.361121  223217 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0817 21:34:15.468188  223217 docker.go:212] disabling docker service ...
	I0817 21:34:15.468376  223217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:34:15.483256  223217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:34:15.496403  223217 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0817 21:34:15.496521  223217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:34:15.607060  223217 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0817 21:34:15.607152  223217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:34:15.621139  223217 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0817 21:34:15.621467  223217 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0817 21:34:15.717633  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:34:15.731702  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:34:15.749341  223217 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0817 21:34:15.749384  223217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:34:15.749441  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:34:15.759503  223217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:34:15.759570  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:34:15.769760  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:34:15.780010  223217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:34:15.790605  223217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
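	Note on the sed edits above: they rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the registry.k8s.io pause image and the cgroupfs cgroup manager, with conmon in the pod cgroup. Reconstructed from those commands, the relevant fragment of the drop-in ends up roughly as follows (section headers assumed from CRI-O's default layout):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"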
	I0817 21:34:15.801484  223217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:34:15.811061  223217 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:34:15.811110  223217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:34:15.811155  223217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 21:34:15.824818  223217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
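	Note on the netfilter step above: the sysctl probe fails because the br_netfilter module is not yet loaded on the fresh Buildroot guest, which the log treats as non-fatal; minikube then loads the module and enables IPv4 forwarding. The equivalent manual steps, as a sketch:
	  sudo modprobe br_netfilter                             # creates /proc/sys/net/bridge/*
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'    # let the node forward pod traffic
	  sysctl net.bridge.bridge-nf-call-iptables              # should now resolve (typically reporting 1)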
	I0817 21:34:15.834947  223217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:34:15.944377  223217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:34:16.122956  223217 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:34:16.123053  223217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:34:16.128167  223217 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:34:16.128200  223217 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:34:16.128211  223217 command_runner.go:130] > Device: 16h/22d	Inode: 708         Links: 1
	I0817 21:34:16.128222  223217 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:34:16.128229  223217 command_runner.go:130] > Access: 2023-08-17 21:34:16.095752362 +0000
	I0817 21:34:16.128252  223217 command_runner.go:130] > Modify: 2023-08-17 21:34:16.095752362 +0000
	I0817 21:34:16.128261  223217 command_runner.go:130] > Change: 2023-08-17 21:34:16.095752362 +0000
	I0817 21:34:16.128269  223217 command_runner.go:130] >  Birth: -
	I0817 21:34:16.128427  223217 start.go:534] Will wait 60s for crictl version
	I0817 21:34:16.128495  223217 ssh_runner.go:195] Run: which crictl
	I0817 21:34:16.136402  223217 command_runner.go:130] > /usr/bin/crictl
	I0817 21:34:16.136503  223217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:34:16.168681  223217 command_runner.go:130] > Version:  0.1.0
	I0817 21:34:16.168706  223217 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:34:16.168713  223217 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0817 21:34:16.168723  223217 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0817 21:34:16.170287  223217 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:34:16.170359  223217 ssh_runner.go:195] Run: crio --version
	I0817 21:34:16.218634  223217 command_runner.go:130] > crio version 1.24.1
	I0817 21:34:16.218665  223217 command_runner.go:130] > Version:          1.24.1
	I0817 21:34:16.218675  223217 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:34:16.218681  223217 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:34:16.218689  223217 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:34:16.218695  223217 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:34:16.218701  223217 command_runner.go:130] > Compiler:         gc
	I0817 21:34:16.218708  223217 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:34:16.218716  223217 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:34:16.218735  223217 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:34:16.218746  223217 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:34:16.218752  223217 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:34:16.220089  223217 ssh_runner.go:195] Run: crio --version
	I0817 21:34:16.268324  223217 command_runner.go:130] > crio version 1.24.1
	I0817 21:34:16.268350  223217 command_runner.go:130] > Version:          1.24.1
	I0817 21:34:16.268358  223217 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:34:16.268363  223217 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:34:16.268371  223217 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:34:16.268379  223217 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:34:16.268386  223217 command_runner.go:130] > Compiler:         gc
	I0817 21:34:16.268394  223217 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:34:16.268403  223217 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:34:16.268414  223217 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:34:16.268418  223217 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:34:16.268423  223217 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:34:16.271887  223217 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 21:34:16.273479  223217 out.go:177]   - env NO_PROXY=192.168.39.104
	I0817 21:34:16.274984  223217 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:34:16.277662  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:16.277996  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:34:16.278021  223217 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:34:16.278347  223217 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:34:16.282732  223217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
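	Note on the one-liner above: it refreshes the host.minikube.internal entry in the guest's /etc/hosts by dropping any stale line, appending a mapping to the libvirt gateway (192.168.39.1), writing the result to a temp file and copying it back into place. Expanded for readability (same commands, hypothetical layout):
	  {
	    grep -v $'\thost.minikube.internal$' /etc/hosts    # keep everything except an old entry
	    echo "192.168.39.1	host.minikube.internal"         # tab-separated gateway mapping
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts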
	I0817 21:34:16.296623  223217 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371 for IP: 192.168.39.175
	I0817 21:34:16.296670  223217 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:34:16.296933  223217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:34:16.297026  223217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:34:16.297049  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:34:16.297070  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:34:16.297086  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:34:16.297104  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:34:16.297184  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 21:34:16.297224  223217 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 21:34:16.297239  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:34:16.297272  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:34:16.297307  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:34:16.297342  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:34:16.297416  223217 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:34:16.297455  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:34:16.297475  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem -> /usr/share/ca-certificates/210670.pem
	I0817 21:34:16.297493  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /usr/share/ca-certificates/2106702.pem
	I0817 21:34:16.297891  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:34:16.323644  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:34:16.348280  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:34:16.372445  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:34:16.397043  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:34:16.425108  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 21:34:16.452152  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 21:34:16.479253  223217 ssh_runner.go:195] Run: openssl version
	I0817 21:34:16.485303  223217 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0817 21:34:16.485407  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:34:16.496081  223217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:34:16.501101  223217 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:34:16.501158  223217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:34:16.501201  223217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:34:16.506424  223217 command_runner.go:130] > b5213941
	I0817 21:34:16.506829  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:34:16.516398  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 21:34:16.525871  223217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 21:34:16.530518  223217 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:34:16.530732  223217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:34:16.530793  223217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 21:34:16.536208  223217 command_runner.go:130] > 51391683
	I0817 21:34:16.536286  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 21:34:16.546676  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 21:34:16.557979  223217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 21:34:16.562942  223217 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:34:16.562973  223217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:34:16.563018  223217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 21:34:16.568424  223217 command_runner.go:130] > 3ec20f2e
	I0817 21:34:16.568769  223217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
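	Note on the certificate installs above: each CA follows the same pattern, namely copy the PEM into /usr/share/ca-certificates, link it under /etc/ssl/certs, then add a hash-named symlink so OpenSSL's lookup by subject hash can find it. Condensed into one iteration, using the minikubeCA paths and the b5213941 hash from the log:
	  src=/usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs "$src" /etc/ssl/certs/minikubeCA.pem                      # name-based link
	  hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)   # -> b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$hash.0"    # hash link used by TLS clients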
	I0817 21:34:16.578718  223217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:34:16.582796  223217 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:34:16.582834  223217 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:34:16.582918  223217 ssh_runner.go:195] Run: crio config
	I0817 21:34:16.645621  223217 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:34:16.645656  223217 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:34:16.645662  223217 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:34:16.645666  223217 command_runner.go:130] > #
	I0817 21:34:16.645674  223217 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:34:16.645679  223217 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:34:16.645694  223217 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:34:16.645712  223217 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:34:16.645718  223217 command_runner.go:130] > # reload'.
	I0817 21:34:16.645736  223217 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:34:16.645748  223217 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:34:16.645762  223217 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:34:16.645772  223217 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:34:16.645776  223217 command_runner.go:130] > [crio]
	I0817 21:34:16.645782  223217 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:34:16.645793  223217 command_runner.go:130] > # containers images, in this directory.
	I0817 21:34:16.645823  223217 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0817 21:34:16.645841  223217 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:34:16.645871  223217 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0817 21:34:16.645890  223217 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:34:16.645901  223217 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:34:16.645924  223217 command_runner.go:130] > storage_driver = "overlay"
	I0817 21:34:16.645936  223217 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:34:16.645944  223217 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:34:16.645951  223217 command_runner.go:130] > storage_option = [
	I0817 21:34:16.646286  223217 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0817 21:34:16.646474  223217 command_runner.go:130] > ]
	I0817 21:34:16.646492  223217 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:34:16.646504  223217 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:34:16.646840  223217 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:34:16.646859  223217 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:34:16.646865  223217 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:34:16.646870  223217 command_runner.go:130] > # always happen on a node reboot
	I0817 21:34:16.647358  223217 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:34:16.647368  223217 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:34:16.647375  223217 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:34:16.647388  223217 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:34:16.647811  223217 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:34:16.647822  223217 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:34:16.647829  223217 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:34:16.648279  223217 command_runner.go:130] > # internal_wipe = true
	I0817 21:34:16.648297  223217 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:34:16.648307  223217 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:34:16.648317  223217 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:34:16.649067  223217 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:34:16.649086  223217 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:34:16.649092  223217 command_runner.go:130] > [crio.api]
	I0817 21:34:16.649100  223217 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:34:16.649550  223217 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:34:16.649566  223217 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:34:16.649985  223217 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:34:16.650001  223217 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:34:16.650011  223217 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:34:16.650653  223217 command_runner.go:130] > # stream_port = "0"
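	Port "0" here means the kernel picks any free port at listen time. The short Go sketch below is not CRI-O code, just a minimal illustration of that OS behavior:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Asking for port 0 lets the kernel choose a free port, which is
		// what the stream server does when stream_port = "0".
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		defer ln.Close()
		fmt.Println("stream server would listen on port", ln.Addr().(*net.TCPAddr).Port)
	}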
	I0817 21:34:16.650663  223217 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:34:16.652565  223217 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:34:16.652594  223217 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:34:16.652598  223217 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:34:16.652606  223217 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:34:16.652616  223217 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:34:16.652625  223217 command_runner.go:130] > # minutes.
	I0817 21:34:16.652645  223217 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:34:16.652651  223217 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:34:16.652660  223217 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:34:16.652667  223217 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:34:16.652673  223217 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:34:16.652681  223217 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:34:16.652688  223217 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:34:16.652692  223217 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:34:16.652708  223217 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:34:16.652724  223217 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0817 21:34:16.652737  223217 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:34:16.652748  223217 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
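	Both limits above are 16 MiB written out in bytes. As a quick check, and as a rough sketch of how a gRPC server applies such limits (assuming the google.golang.org/grpc package; this is not CRI-O's own server setup):

	package main

	import (
		"fmt"

		"google.golang.org/grpc"
	)

	func main() {
		// 16 MiB, matching grpc_max_send_msg_size and grpc_max_recv_msg_size above.
		const maxMsgSize = 16 * 1024 * 1024
		fmt.Println(maxMsgSize) // prints 16777216

		// Illustrative only: apply the same limits to a gRPC server.
		_ = grpc.NewServer(
			grpc.MaxRecvMsgSize(maxMsgSize),
			grpc.MaxSendMsgSize(maxMsgSize),
		)
	}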
	I0817 21:34:16.652763  223217 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:34:16.652771  223217 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:34:16.652775  223217 command_runner.go:130] > [crio.runtime]
	I0817 21:34:16.652783  223217 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:34:16.652791  223217 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:34:16.652795  223217 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:34:16.652805  223217 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:34:16.652815  223217 command_runner.go:130] > # default_ulimits = [
	I0817 21:34:16.652824  223217 command_runner.go:130] > # ]
	I0817 21:34:16.652838  223217 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:34:16.652845  223217 command_runner.go:130] > # no_pivot = false
	I0817 21:34:16.652851  223217 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:34:16.652860  223217 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:34:16.652867  223217 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:34:16.652873  223217 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:34:16.652881  223217 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:34:16.652895  223217 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:34:16.652907  223217 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0817 21:34:16.652917  223217 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:34:16.652929  223217 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:34:16.652939  223217 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:34:16.652951  223217 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:34:16.652958  223217 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:34:16.652965  223217 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:34:16.652973  223217 command_runner.go:130] > conmon_env = [
	I0817 21:34:16.652984  223217 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0817 21:34:16.652993  223217 command_runner.go:130] > ]
	I0817 21:34:16.653002  223217 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:34:16.653014  223217 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:34:16.653027  223217 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:34:16.653037  223217 command_runner.go:130] > # default_env = [
	I0817 21:34:16.653044  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653052  223217 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:34:16.653061  223217 command_runner.go:130] > # selinux = false
	I0817 21:34:16.653075  223217 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:34:16.653089  223217 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:34:16.653102  223217 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:34:16.653112  223217 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:34:16.653124  223217 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:34:16.653133  223217 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:34:16.653164  223217 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:34:16.653177  223217 command_runner.go:130] > # which might increase security.
	I0817 21:34:16.653185  223217 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0817 21:34:16.653196  223217 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:34:16.653209  223217 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:34:16.653219  223217 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:34:16.653233  223217 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0817 21:34:16.653245  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:34:16.653256  223217 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:34:16.653268  223217 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:34:16.653279  223217 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:34:16.653290  223217 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:34:16.653301  223217 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:34:16.653310  223217 command_runner.go:130] > # irqbalance daemon.
	I0817 21:34:16.653322  223217 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:34:16.653337  223217 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:34:16.653348  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:34:16.653358  223217 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:34:16.653371  223217 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:34:16.653380  223217 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:34:16.653389  223217 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:34:16.653398  223217 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:34:16.653413  223217 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:34:16.653427  223217 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:34:16.653436  223217 command_runner.go:130] > # will be added.
	I0817 21:34:16.653446  223217 command_runner.go:130] > # default_capabilities = [
	I0817 21:34:16.653455  223217 command_runner.go:130] > # 	"CHOWN",
	I0817 21:34:16.653463  223217 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:34:16.653471  223217 command_runner.go:130] > # 	"FSETID",
	I0817 21:34:16.653475  223217 command_runner.go:130] > # 	"FOWNER",
	I0817 21:34:16.653485  223217 command_runner.go:130] > # 	"SETGID",
	I0817 21:34:16.653495  223217 command_runner.go:130] > # 	"SETUID",
	I0817 21:34:16.653505  223217 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:34:16.653514  223217 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:34:16.653523  223217 command_runner.go:130] > # 	"KILL",
	I0817 21:34:16.653532  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653545  223217 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:34:16.653556  223217 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:34:16.653563  223217 command_runner.go:130] > # default_sysctls = [
	I0817 21:34:16.653568  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653580  223217 command_runner.go:130] > # List of devices on the host that a
	I0817 21:34:16.653594  223217 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:34:16.653604  223217 command_runner.go:130] > # allowed_devices = [
	I0817 21:34:16.653613  223217 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:34:16.653621  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653632  223217 command_runner.go:130] > # List of additional devices, specified as
	I0817 21:34:16.653644  223217 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:34:16.653651  223217 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:34:16.653677  223217 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:34:16.653688  223217 command_runner.go:130] > # additional_devices = [
	I0817 21:34:16.653697  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653706  223217 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:34:16.653715  223217 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:34:16.653730  223217 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:34:16.653736  223217 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:34:16.653742  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653755  223217 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:34:16.653772  223217 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:34:16.653781  223217 command_runner.go:130] > # Defaults to false.
	I0817 21:34:16.653793  223217 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:34:16.653807  223217 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:34:16.653816  223217 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:34:16.653820  223217 command_runner.go:130] > # hooks_dir = [
	I0817 21:34:16.653827  223217 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:34:16.653836  223217 command_runner.go:130] > # ]
	I0817 21:34:16.653848  223217 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:34:16.653862  223217 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:34:16.653874  223217 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:34:16.653883  223217 command_runner.go:130] > #
	I0817 21:34:16.653896  223217 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:34:16.653905  223217 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:34:16.653913  223217 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:34:16.653924  223217 command_runner.go:130] > #
	I0817 21:34:16.653936  223217 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:34:16.653951  223217 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:34:16.653965  223217 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:34:16.653976  223217 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:34:16.653984  223217 command_runner.go:130] > #
	I0817 21:34:16.653988  223217 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:34:16.653998  223217 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:34:16.654013  223217 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:34:16.654023  223217 command_runner.go:130] > pids_limit = 1024
	I0817 21:34:16.654037  223217 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0817 21:34:16.654060  223217 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:34:16.654074  223217 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:34:16.654092  223217 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:34:16.654102  223217 command_runner.go:130] > # log_size_max = -1
	I0817 21:34:16.654116  223217 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0817 21:34:16.654126  223217 command_runner.go:130] > # log_to_journald = false
	I0817 21:34:16.654141  223217 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:34:16.654153  223217 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:34:16.654164  223217 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:34:16.654176  223217 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:34:16.654188  223217 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:34:16.654198  223217 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:34:16.654210  223217 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:34:16.654219  223217 command_runner.go:130] > # read_only = false
	I0817 21:34:16.654232  223217 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:34:16.654245  223217 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:34:16.654256  223217 command_runner.go:130] > # live configuration reload.
	I0817 21:34:16.654263  223217 command_runner.go:130] > # log_level = "info"
	I0817 21:34:16.654276  223217 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:34:16.654288  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:34:16.654297  223217 command_runner.go:130] > # log_filter = ""
	I0817 21:34:16.654311  223217 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:34:16.654323  223217 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:34:16.654330  223217 command_runner.go:130] > # separated by comma.
	I0817 21:34:16.654337  223217 command_runner.go:130] > # uid_mappings = ""
	I0817 21:34:16.654350  223217 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:34:16.654364  223217 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:34:16.654373  223217 command_runner.go:130] > # separated by comma.
	I0817 21:34:16.654383  223217 command_runner.go:130] > # gid_mappings = ""
	I0817 21:34:16.654396  223217 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:34:16.654409  223217 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:34:16.654418  223217 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:34:16.654428  223217 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:34:16.654442  223217 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:34:16.654455  223217 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:34:16.654468  223217 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:34:16.654480  223217 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:34:16.654493  223217 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:34:16.654506  223217 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:34:16.654514  223217 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0817 21:34:16.654524  223217 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:34:16.654537  223217 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:34:16.654551  223217 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:34:16.654562  223217 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:34:16.654573  223217 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:34:16.654583  223217 command_runner.go:130] > drop_infra_ctr = false
	I0817 21:34:16.654597  223217 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:34:16.654609  223217 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:34:16.654625  223217 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:34:16.654635  223217 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:34:16.654646  223217 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:34:16.654657  223217 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:34:16.654668  223217 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:34:16.654683  223217 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:34:16.654694  223217 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0817 21:34:16.654705  223217 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:34:16.654715  223217 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:34:16.654734  223217 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:34:16.654744  223217 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:34:16.654754  223217 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:34:16.654769  223217 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0817 21:34:16.654788  223217 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0817 21:34:16.654799  223217 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:34:16.654813  223217 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:34:16.654821  223217 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:34:16.654831  223217 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:34:16.654840  223217 command_runner.go:130] > # ]
	I0817 21:34:16.654851  223217 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:34:16.654865  223217 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:34:16.654880  223217 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:34:16.654894  223217 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:34:16.654903  223217 command_runner.go:130] > #
	I0817 21:34:16.654912  223217 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:34:16.654921  223217 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:34:16.654931  223217 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:34:16.654942  223217 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:34:16.654954  223217 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:34:16.654965  223217 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:34:16.654974  223217 command_runner.go:130] > # Where:
	I0817 21:34:16.654986  223217 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:34:16.655000  223217 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:34:16.655015  223217 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:34:16.655028  223217 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:34:16.655038  223217 command_runner.go:130] > #   in $PATH.
	I0817 21:34:16.655049  223217 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:34:16.655061  223217 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:34:16.655074  223217 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:34:16.655083  223217 command_runner.go:130] > #   state.
	I0817 21:34:16.655097  223217 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:34:16.655107  223217 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0817 21:34:16.655125  223217 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:34:16.655135  223217 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:34:16.655149  223217 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:34:16.655163  223217 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:34:16.655174  223217 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:34:16.655188  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:34:16.655197  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:34:16.655209  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:34:16.655222  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:34:16.655238  223217 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:34:16.655253  223217 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:34:16.655266  223217 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:34:16.655280  223217 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:34:16.655289  223217 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:34:16.655297  223217 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:34:16.655308  223217 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0817 21:34:16.655319  223217 command_runner.go:130] > runtime_type = "oci"
	I0817 21:34:16.655329  223217 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:34:16.655340  223217 command_runner.go:130] > runtime_config_path = ""
	I0817 21:34:16.655350  223217 command_runner.go:130] > monitor_path = ""
	I0817 21:34:16.655357  223217 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:34:16.655367  223217 command_runner.go:130] > monitor_exec_cgroup = ""
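	The runc entry above is a concrete instance of the [crio.runtime.runtimes] format described in the preceding comments. As a hypothetical sketch of how such a table can be decoded (assuming the github.com/BurntSushi/toml package; the struct below mirrors only three of the fields and is not CRI-O's own type):

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// runtimeHandler mirrors a subset of the fields shown in the config above.
	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
	}

	type config struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		const snippet = `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	`
		var cfg config
		if _, err := toml.Decode(snippet, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("runc handler: %+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}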
	I0817 21:34:16.655376  223217 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:34:16.655384  223217 command_runner.go:130] > # running containers
	I0817 21:34:16.655392  223217 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:34:16.655407  223217 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:34:16.655438  223217 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:34:16.655453  223217 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0817 21:34:16.655459  223217 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:34:16.655464  223217 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:34:16.655474  223217 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:34:16.655485  223217 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:34:16.655497  223217 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:34:16.655507  223217 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:34:16.655521  223217 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:34:16.655533  223217 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:34:16.655544  223217 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:34:16.655555  223217 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0817 21:34:16.655572  223217 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:34:16.655584  223217 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:34:16.655603  223217 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:34:16.655619  223217 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:34:16.655630  223217 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:34:16.655641  223217 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:34:16.655650  223217 command_runner.go:130] > # Example:
	I0817 21:34:16.655662  223217 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:34:16.655673  223217 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:34:16.655685  223217 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:34:16.655696  223217 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:34:16.655708  223217 command_runner.go:130] > # cpuset = 0
	I0817 21:34:16.655717  223217 command_runner.go:130] > # cpushares = "0-1"
	I0817 21:34:16.655727  223217 command_runner.go:130] > # Where:
	I0817 21:34:16.655738  223217 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:34:16.655754  223217 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:34:16.655768  223217 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:34:16.655780  223217 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:34:16.655796  223217 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:34:16.655805  223217 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
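	Following the example above, a pod opting into the "workload-type" workload would carry the activation annotation plus, optionally, a per-container override. The container name "app" and the cpushares value below are made up for illustration:

	package main

	import "fmt"

	func main() {
		// Hypothetical pod annotations matching the workload example above:
		// the activation key opts the pod in (value ignored), and the prefixed
		// key overrides cpushares for a container named "app".
		annotations := map[string]string{
			"io.crio/workload":          "",
			"io.crio.workload-type/app": `{"cpushares": "512"}`,
		}
		fmt.Println(annotations)
	}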
	I0817 21:34:16.655812  223217 command_runner.go:130] > # 
	I0817 21:34:16.655830  223217 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:34:16.655850  223217 command_runner.go:130] > #
	I0817 21:34:16.655863  223217 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:34:16.655876  223217 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:34:16.655887  223217 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:34:16.655897  223217 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:34:16.655910  223217 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:34:16.655920  223217 command_runner.go:130] > [crio.image]
	I0817 21:34:16.655933  223217 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:34:16.655943  223217 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:34:16.655956  223217 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:34:16.655969  223217 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:34:16.655977  223217 command_runner.go:130] > # global_auth_file = ""
	I0817 21:34:16.655984  223217 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:34:16.655996  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:34:16.656008  223217 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:34:16.656023  223217 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:34:16.656036  223217 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:34:16.656047  223217 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:34:16.656056  223217 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:34:16.656062  223217 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:34:16.656074  223217 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0817 21:34:16.656089  223217 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0817 21:34:16.656102  223217 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:34:16.656113  223217 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:34:16.656126  223217 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:34:16.656139  223217 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:34:16.656148  223217 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:34:16.656161  223217 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:34:16.656174  223217 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:34:16.656184  223217 command_runner.go:130] > # signature_policy = ""
	I0817 21:34:16.656200  223217 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:34:16.656213  223217 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:34:16.656223  223217 command_runner.go:130] > # changing them here.
	I0817 21:34:16.656231  223217 command_runner.go:130] > # insecure_registries = [
	I0817 21:34:16.656238  223217 command_runner.go:130] > # ]
	I0817 21:34:16.656249  223217 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:34:16.656261  223217 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0817 21:34:16.656272  223217 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:34:16.656283  223217 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:34:16.656293  223217 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:34:16.656306  223217 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:34:16.656316  223217 command_runner.go:130] > # CNI plugins.
	I0817 21:34:16.656322  223217 command_runner.go:130] > [crio.network]
	I0817 21:34:16.656336  223217 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:34:16.656348  223217 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0817 21:34:16.656358  223217 command_runner.go:130] > # cni_default_network = ""
	I0817 21:34:16.656371  223217 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:34:16.656382  223217 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:34:16.656392  223217 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:34:16.656402  223217 command_runner.go:130] > # plugin_dirs = [
	I0817 21:34:16.656411  223217 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:34:16.656420  223217 command_runner.go:130] > # ]
	I0817 21:34:16.656434  223217 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:34:16.656443  223217 command_runner.go:130] > [crio.metrics]
	I0817 21:34:16.656455  223217 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:34:16.656465  223217 command_runner.go:130] > enable_metrics = true
	I0817 21:34:16.656474  223217 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:34:16.656480  223217 command_runner.go:130] > # Per default all metrics are enabled.
	I0817 21:34:16.656491  223217 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:34:16.656505  223217 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:34:16.656519  223217 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:34:16.656529  223217 command_runner.go:130] > # metrics_collectors = [
	I0817 21:34:16.656538  223217 command_runner.go:130] > # 	"operations",
	I0817 21:34:16.656549  223217 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:34:16.656560  223217 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:34:16.656568  223217 command_runner.go:130] > # 	"operations_errors",
	I0817 21:34:16.656576  223217 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:34:16.656584  223217 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:34:16.656595  223217 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:34:16.656606  223217 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:34:16.656616  223217 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:34:16.656626  223217 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:34:16.656635  223217 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:34:16.656645  223217 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:34:16.656654  223217 command_runner.go:130] > # 	"containers_oom",
	I0817 21:34:16.656659  223217 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:34:16.656664  223217 command_runner.go:130] > # 	"operations_total",
	I0817 21:34:16.656674  223217 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:34:16.656686  223217 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:34:16.656697  223217 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:34:16.656707  223217 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:34:16.656718  223217 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:34:16.656732  223217 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:34:16.656742  223217 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:34:16.656751  223217 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:34:16.656758  223217 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:34:16.656764  223217 command_runner.go:130] > # ]
	I0817 21:34:16.656776  223217 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:34:16.656786  223217 command_runner.go:130] > # metrics_port = 9090
	I0817 21:34:16.656795  223217 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:34:16.656805  223217 command_runner.go:130] > # metrics_socket = ""
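	With enable_metrics = true and the default port above, the metrics endpoint can be probed directly. A minimal sketch, assuming CRI-O is reachable on localhost and purely illustrative:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Probe the Prometheus-format metrics endpoint on the default port.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics server not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("fetched %d bytes of metrics\n", len(body))
	}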
	I0817 21:34:16.656818  223217 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:34:16.656832  223217 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:34:16.656845  223217 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:34:16.656854  223217 command_runner.go:130] > # certificate on any modification event.
	I0817 21:34:16.656862  223217 command_runner.go:130] > # metrics_cert = ""
	I0817 21:34:16.656871  223217 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:34:16.656883  223217 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:34:16.656891  223217 command_runner.go:130] > # metrics_key = ""
	I0817 21:34:16.656904  223217 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:34:16.656913  223217 command_runner.go:130] > [crio.tracing]
	I0817 21:34:16.656925  223217 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:34:16.656936  223217 command_runner.go:130] > # enable_tracing = false
	I0817 21:34:16.656947  223217 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0817 21:34:16.656955  223217 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:34:16.656962  223217 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:34:16.656973  223217 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:34:16.656987  223217 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:34:16.656997  223217 command_runner.go:130] > [crio.stats]
	I0817 21:34:16.657010  223217 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:34:16.657023  223217 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:34:16.657033  223217 command_runner.go:130] > # stats_collection_period = 0
	I0817 21:34:16.657080  223217 command_runner.go:130] ! time="2023-08-17 21:34:16.630316496Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0817 21:34:16.657102  223217 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:34:16.657177  223217 cni.go:84] Creating CNI manager for ""
	I0817 21:34:16.657189  223217 cni.go:136] 2 nodes found, recommending kindnet
	I0817 21:34:16.657233  223217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:34:16.657266  223217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959371 NodeName:multinode-959371-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:34:16.657422  223217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959371-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:34:16.657496  223217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-959371-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:34:16.657566  223217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:34:16.667527  223217 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.27.4': No such file or directory
	I0817 21:34:16.667581  223217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.27.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.27.4': No such file or directory
	
	Initiating transfer...
	I0817 21:34:16.667651  223217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.27.4
	I0817 21:34:16.676421  223217 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl.sha256
	I0817 21:34:16.676456  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubectl -> /var/lib/minikube/binaries/v1.27.4/kubectl
	I0817 21:34:16.676542  223217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.4/kubectl
	I0817 21:34:16.676577  223217 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubelet
	I0817 21:34:16.676593  223217 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubeadm
	I0817 21:34:16.680635  223217 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.4/kubectl': No such file or directory
	I0817 21:34:16.680798  223217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.4/kubectl': No such file or directory
	I0817 21:34:16.680830  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubectl --> /var/lib/minikube/binaries/v1.27.4/kubectl (49262592 bytes)
	I0817 21:34:20.273430  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubeadm -> /var/lib/minikube/binaries/v1.27.4/kubeadm
	I0817 21:34:20.273518  223217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.4/kubeadm
	I0817 21:34:20.278411  223217 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.4/kubeadm': No such file or directory
	I0817 21:34:20.278453  223217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.4/kubeadm': No such file or directory
	I0817 21:34:20.278480  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubeadm --> /var/lib/minikube/binaries/v1.27.4/kubeadm (48164864 bytes)
	I0817 21:34:20.714634  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:34:20.728645  223217 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubelet -> /var/lib/minikube/binaries/v1.27.4/kubelet
	I0817 21:34:20.728742  223217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.4/kubelet
	I0817 21:34:20.733327  223217 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.4/kubelet': No such file or directory
	I0817 21:34:20.733371  223217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.4/kubelet': No such file or directory
	I0817 21:34:20.733402  223217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.27.4/kubelet --> /var/lib/minikube/binaries/v1.27.4/kubelet (106168320 bytes)
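	The three transfers above follow the same pattern: probe the remote path with stat and copy the binary only when the probe fails. A rough standalone sketch of that pattern (host and paths are placeholders, and the plain ssh/scp invocation is simplified compared to minikube's ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureRemoteFile probes the remote path and copies the local file only
	// when the probe fails, mirroring the existence-check-then-scp flow above.
	func ensureRemoteFile(host, local, remote string) error {
		if err := exec.Command("ssh", host, "stat", remote).Run(); err == nil {
			return nil // already present, nothing to transfer
		}
		return exec.Command("scp", local, host+":"+remote).Run()
	}

	func main() {
		// Documentation IP used as a placeholder host.
		err := ensureRemoteFile("docker@192.0.2.10",
			"./kubelet", "/var/lib/minikube/binaries/v1.27.4/kubelet")
		if err != nil {
			fmt.Println("transfer failed:", err)
		}
	}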
	I0817 21:34:21.252131  223217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0817 21:34:21.260963  223217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0817 21:34:21.278601  223217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:34:21.295679  223217 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0817 21:34:21.299839  223217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:34:21.312810  223217 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:34:21.313084  223217 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:34:21.313237  223217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:34:21.313308  223217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:34:21.328396  223217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0817 21:34:21.328926  223217 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:34:21.329471  223217 main.go:141] libmachine: Using API Version  1
	I0817 21:34:21.329500  223217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:34:21.329887  223217 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:34:21.330125  223217 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:34:21.330292  223217 start.go:301] JoinCluster: &{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:34:21.330411  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0817 21:34:21.330428  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:34:21.333490  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:34:21.333966  223217 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:34:21.334006  223217 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:34:21.334190  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:34:21.334396  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:34:21.334585  223217 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:34:21.334727  223217 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:34:21.522801  223217 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token iqnnc6.24txgsfazxchwu24 --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 21:34:21.523016  223217 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:34:21.523123  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqnnc6.24txgsfazxchwu24 --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-959371-m02"
	I0817 21:34:21.572869  223217 command_runner.go:130] > [preflight] Running pre-flight checks
	I0817 21:34:21.818961  223217 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0817 21:34:21.819001  223217 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0817 21:34:21.859157  223217 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:34:21.859192  223217 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:34:21.859200  223217 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:34:21.990538  223217 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0817 21:34:24.505651  223217 command_runner.go:130] > This node has joined the cluster:
	I0817 21:34:24.505686  223217 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0817 21:34:24.505696  223217 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0817 21:34:24.505706  223217 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0817 21:34:24.507844  223217 command_runner.go:130] ! W0817 21:34:21.564993     824 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0817 21:34:24.507875  223217 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:34:24.507903  223217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqnnc6.24txgsfazxchwu24 --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-959371-m02": (2.984736119s)
	I0817 21:34:24.507929  223217 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0817 21:34:24.755753  223217 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0817 21:34:24.755792  223217 start.go:303] JoinCluster complete in 3.425501775s
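(Annotation, not part of the captured output.) The lines above record the worker-join sequence: a fresh join command is printed on the control plane, replayed on the new node with extra flags, and kubelet is then enabled. A minimal Go sketch of that flow is below; it runs both steps locally for illustration, whereas in the real run each command goes over SSH to its own machine, and the node name and kubeadm flags here are assumptions rather than values from this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: ask the control plane for a fresh join command (token + CA hash).
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatalf("token create: %v", err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2: replay it on the worker, mirroring the extra flags seen in the log.
	joinCmd += " --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=worker-m02"
	fmt.Println("would run on worker:", joinCmd)

	// Step 3: enable and start kubelet so the node stays joined across reboots.
	fmt.Println("would run on worker: systemctl daemon-reload && systemctl enable --now kubelet")
}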
	I0817 21:34:24.755805  223217 cni.go:84] Creating CNI manager for ""
	I0817 21:34:24.755813  223217 cni.go:136] 2 nodes found, recommending kindnet
	I0817 21:34:24.755868  223217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:34:24.765772  223217 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:34:24.765816  223217 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0817 21:34:24.765827  223217 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0817 21:34:24.765841  223217 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:34:24.765850  223217 command_runner.go:130] > Access: 2023-08-17 21:32:53.856618572 +0000
	I0817 21:34:24.765858  223217 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0817 21:34:24.765867  223217 command_runner.go:130] > Change: 2023-08-17 21:32:51.964618572 +0000
	I0817 21:34:24.765874  223217 command_runner.go:130] >  Birth: -
	I0817 21:34:24.765944  223217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:34:24.765960  223217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:34:24.787765  223217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:34:25.165755  223217 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:34:25.174014  223217 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:34:25.177563  223217 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0817 21:34:25.196616  223217 command_runner.go:130] > daemonset.apps/kindnet configured
	I0817 21:34:25.199890  223217 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:34:25.200144  223217 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:34:25.200489  223217 round_trippers.go:463] GET https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:34:25.200499  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:25.200508  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:25.200514  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:25.203098  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:25.203116  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:25.203125  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:25.203133  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:25.203141  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:25.203150  223217 round_trippers.go:580]     Content-Length: 291
	I0817 21:34:25.203163  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:25 GMT
	I0817 21:34:25.203176  223217 round_trippers.go:580]     Audit-Id: 5ce8df2d-35e5-4361-be85-f894c3e7475f
	I0817 21:34:25.203191  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:25.203220  223217 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"451","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0817 21:34:25.203320  223217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-959371" context rescaled to 1 replicas
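(Annotation, not part of the captured output.) The GET above reads the coredns Deployment's scale subresource before minikube rescales it to one replica for a single-control-plane cluster. A minimal client-go sketch of that read-then-rescale step follows; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Read the scale subresource, as in the GET .../deployments/coredns/scale above.
	scale, err := client.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("coredns replicas: %d\n", scale.Spec.Replicas)

	// Rescale to a single replica if needed.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").
			UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}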
	I0817 21:34:25.203353  223217 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:34:25.206510  223217 out.go:177] * Verifying Kubernetes components...
	I0817 21:34:25.208048  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:34:25.222143  223217 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:34:25.222463  223217 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:34:25.222757  223217 node_ready.go:35] waiting up to 6m0s for node "multinode-959371-m02" to be "Ready" ...
	I0817 21:34:25.222838  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:25.222848  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:25.222860  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:25.222874  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:25.226387  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:25.226405  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:25.226412  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:25.226417  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:25.226423  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:25.226428  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:25.226436  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:25 GMT
	I0817 21:34:25.226442  223217 round_trippers.go:580]     Audit-Id: 69464a57-8eeb-4a19-b4d6-65ac2bbe9676
	I0817 21:34:25.226450  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:25.226690  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:25.226968  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:25.226980  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:25.226991  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:25.227000  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:25.229483  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:25.229497  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:25.229503  223217 round_trippers.go:580]     Audit-Id: 64933572-4d0b-4ad3-9023-bd0e63e533f3
	I0817 21:34:25.229509  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:25.229519  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:25.229530  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:25.229543  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:25.229552  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:25.229562  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:25 GMT
	I0817 21:34:25.229749  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:25.731120  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:25.731145  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:25.731154  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:25.731160  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:25.734031  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:25.734077  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:25.734089  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:25.734099  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:25.734107  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:25.734116  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:25.734125  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:25 GMT
	I0817 21:34:25.734133  223217 round_trippers.go:580]     Audit-Id: c8b07ad2-c493-4122-8bd9-8a1b3ca5d933
	I0817 21:34:25.734146  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:25.734237  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:26.230812  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:26.230838  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:26.230846  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:26.230852  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:26.234767  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:26.234803  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:26.234815  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:26.234824  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:26 GMT
	I0817 21:34:26.234833  223217 round_trippers.go:580]     Audit-Id: 3ab1f73b-cec7-40d4-961e-f33e91e34f2a
	I0817 21:34:26.234843  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:26.234858  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:26.234878  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:26.234888  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:26.235086  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:26.730268  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:26.730296  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:26.730307  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:26.730315  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:26.733805  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:26.733836  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:26.733849  223217 round_trippers.go:580]     Audit-Id: 51ec889f-8926-48d2-bde5-238d0ee10be9
	I0817 21:34:26.733858  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:26.733867  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:26.733878  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:26.733891  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:26.733901  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:26.733911  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:26 GMT
	I0817 21:34:26.734038  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:27.230262  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:27.230286  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:27.230294  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:27.230300  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:27.233990  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:27.234014  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:27.234021  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:27.234027  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:27.234032  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:27.234038  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:27 GMT
	I0817 21:34:27.234043  223217 round_trippers.go:580]     Audit-Id: 8c9f9c89-5ae4-4d8f-bfd9-23121f0ea170
	I0817 21:34:27.234049  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:27.234073  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:27.234169  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:27.234452  223217 node_ready.go:58] node "multinode-959371-m02" has status "Ready":"False"
	I0817 21:34:27.730372  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:27.730425  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:27.730435  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:27.730441  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:27.733703  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:27.733740  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:27.733750  223217 round_trippers.go:580]     Audit-Id: fe699145-1796-4c4e-be1b-7180d9e47224
	I0817 21:34:27.733759  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:27.733767  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:27.733775  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:27.733785  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:27.733793  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:27.733802  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:27 GMT
	I0817 21:34:27.733950  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:28.230487  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:28.230520  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:28.230534  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:28.230544  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:28.233881  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:28.233909  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:28.233916  223217 round_trippers.go:580]     Content-Length: 3531
	I0817 21:34:28.233922  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:28 GMT
	I0817 21:34:28.233927  223217 round_trippers.go:580]     Audit-Id: 486fea1c-c86b-4e2d-b49c-25169a8367c7
	I0817 21:34:28.233932  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:28.233938  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:28.233944  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:28.233953  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:28.234048  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"504","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0817 21:34:28.730272  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:28.730302  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:28.730315  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:28.730339  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:28.734124  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:28.734154  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:28.734172  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:28.734188  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:28.734197  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:28.734207  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:28.734216  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:28.734226  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:28 GMT
	I0817 21:34:28.734235  223217 round_trippers.go:580]     Audit-Id: 7e81a4fa-be31-4822-8368-692c08e2c8cf
	I0817 21:34:28.734329  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:29.231024  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:29.231057  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:29.231069  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:29.231078  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:29.234696  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:29.234733  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:29.234746  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:29 GMT
	I0817 21:34:29.234755  223217 round_trippers.go:580]     Audit-Id: e546daca-96c5-4ec9-bd9e-ea48982837b8
	I0817 21:34:29.234764  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:29.234777  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:29.234789  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:29.234802  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:29.234813  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:29.234880  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:29.235165  223217 node_ready.go:58] node "multinode-959371-m02" has status "Ready":"False"
	I0817 21:34:29.730341  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:29.730364  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:29.730372  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:29.730379  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:29.733369  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:29.733391  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:29.733397  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:29.733403  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:29 GMT
	I0817 21:34:29.733409  223217 round_trippers.go:580]     Audit-Id: f4846a56-0635-4f3d-9929-7e7369328c86
	I0817 21:34:29.733414  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:29.733420  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:29.733426  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:29.733431  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:29.733495  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:30.230398  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:30.230424  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:30.230432  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:30.230440  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:30.233886  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:30.233914  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:30.233921  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:30.233927  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:30.233933  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:30.233938  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:30 GMT
	I0817 21:34:30.233943  223217 round_trippers.go:580]     Audit-Id: 46dc5e0b-0be4-4e87-90a4-48d183f8e2de
	I0817 21:34:30.233948  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:30.233954  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:30.234040  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:30.730249  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:30.730273  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:30.730285  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:30.730297  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:30.733997  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:30.734034  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:30.734045  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:30.734065  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:30.734074  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:30 GMT
	I0817 21:34:30.734082  223217 round_trippers.go:580]     Audit-Id: 7ae7a9e1-50f4-459a-8597-f229cb3a1bd7
	I0817 21:34:30.734097  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:30.734107  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:30.734116  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:30.734228  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:31.230246  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:31.230278  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:31.230288  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:31.230296  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:31.234161  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:31.234186  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:31.234194  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:31.234200  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:31.234205  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:31.234211  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:31 GMT
	I0817 21:34:31.234216  223217 round_trippers.go:580]     Audit-Id: 1eaa9172-1d9b-48da-9172-19897bb681fb
	I0817 21:34:31.234221  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:31.234226  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:31.234417  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:31.730529  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:31.730557  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:31.730571  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:31.730577  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:31.734142  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:31.734176  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:31.734187  223217 round_trippers.go:580]     Audit-Id: 9a4f60c9-1688-48ba-b30a-cfc0844a6406
	I0817 21:34:31.734197  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:31.734206  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:31.734214  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:31.734224  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:31.734236  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:31.734247  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:31 GMT
	I0817 21:34:31.734361  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:31.734652  223217 node_ready.go:58] node "multinode-959371-m02" has status "Ready":"False"
	I0817 21:34:32.231008  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:32.231039  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.231052  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.231061  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.234093  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:32.234127  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.234138  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.234144  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.234149  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.234155  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.234164  223217 round_trippers.go:580]     Content-Length: 3640
	I0817 21:34:32.234173  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.234181  223217 round_trippers.go:580]     Audit-Id: a580bd99-537a-48c4-ad85-15a95acdcb38
	I0817 21:34:32.234246  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"513","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0817 21:34:32.730868  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:32.730895  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.730907  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.730915  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.733721  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:32.733746  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.733754  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.733760  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.733765  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.733774  223217 round_trippers.go:580]     Content-Length: 3726
	I0817 21:34:32.733783  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.733791  223217 round_trippers.go:580]     Audit-Id: ac38790b-cb96-4edc-83ff-faa771560a20
	I0817 21:34:32.733799  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.733979  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"530","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0817 21:34:32.734282  223217 node_ready.go:49] node "multinode-959371-m02" has status "Ready":"True"
	I0817 21:34:32.734301  223217 node_ready.go:38] duration metric: took 7.511527156s waiting for node "multinode-959371-m02" to be "Ready" ...
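(Annotation, not part of the captured output.) The repeated GETs above are a readiness poll: the node object is fetched roughly twice a second until its NodeReady condition turns True or the 6m timeout elapses. A minimal client-go sketch of the same loop is below; the kubeconfig path, poll interval, and timeout are assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-959371-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}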
	I0817 21:34:32.734322  223217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:34:32.734392  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:34:32.734402  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.734414  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.734428  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.737726  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:32.737745  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.737755  223217 round_trippers.go:580]     Audit-Id: c74ffcb6-2ef7-4cd6-a5c4-558dc8cb7dce
	I0817 21:34:32.737765  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.737778  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.737791  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.737804  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.737816  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.739369  223217 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"447","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67374 chars]
	I0817 21:34:32.741392  223217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.741475  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:34:32.741482  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.741491  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.741499  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.743744  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:32.743765  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.743774  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.743782  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.743790  223217 round_trippers.go:580]     Audit-Id: d5d6260b-3ecc-4837-9f12-1553c4cebdb2
	I0817 21:34:32.743798  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.743805  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.743814  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.744008  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"447","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0817 21:34:32.744485  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:32.744499  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.744507  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.744513  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.746486  223217 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:34:32.746499  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.746505  223217 round_trippers.go:580]     Audit-Id: 4b186b07-f62e-4639-acd3-d4e7d5068bd1
	I0817 21:34:32.746510  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.746515  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.746520  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.746526  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.746534  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.746713  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:34:32.746999  223217 pod_ready.go:92] pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:32.747012  223217 pod_ready.go:81] duration metric: took 5.600334ms waiting for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.747021  223217 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.747067  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-959371
	I0817 21:34:32.747073  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.747080  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.747088  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.748946  223217 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:34:32.748966  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.748976  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.748986  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.748995  223217 round_trippers.go:580]     Audit-Id: d05cb798-b628-4617-b250-f7874e3003f5
	I0817 21:34:32.749006  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.749018  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.749030  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.749153  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"441","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0817 21:34:32.749610  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:32.749626  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.749633  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.749640  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.751560  223217 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:34:32.751576  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.751582  223217 round_trippers.go:580]     Audit-Id: 98a53fad-fd9f-4129-baf3-e41954fd453a
	I0817 21:34:32.751588  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.751593  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.751598  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.751603  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.751609  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.751878  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:34:32.752145  223217 pod_ready.go:92] pod "etcd-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:32.752158  223217 pod_ready.go:81] duration metric: took 5.129406ms waiting for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.752171  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.752222  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-959371
	I0817 21:34:32.752230  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.752236  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.752242  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.754795  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:32.754810  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.754816  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.754822  223217 round_trippers.go:580]     Audit-Id: 4fb4b273-9daa-46cb-a75b-491b190dd95d
	I0817 21:34:32.754827  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.754841  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.754853  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.754862  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.755027  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-959371","namespace":"kube-system","uid":"0efb1ae7-705a-47df-91c6-0d9390b68983","resourceVersion":"442","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.104:8443","kubernetes.io/config.hash":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.mirror":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.seen":"2023-08-17T21:33:26.519082064Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0817 21:34:32.755373  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:32.755383  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.755390  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.755395  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.757485  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:32.757504  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.757512  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.757521  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.757530  223217 round_trippers.go:580]     Audit-Id: bba2b8ef-6c5d-4397-8443-87213da88fb9
	I0817 21:34:32.757538  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.757545  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.757556  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.757730  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:34:32.757989  223217 pod_ready.go:92] pod "kube-apiserver-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:32.758001  223217 pod_ready.go:81] duration metric: took 5.821216ms waiting for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.758009  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.758046  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:34:32.758070  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.758081  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.758093  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.760222  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:32.760240  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.760249  223217 round_trippers.go:580]     Audit-Id: c9525593-11ff-4373-8698-a85544ca491d
	I0817 21:34:32.760260  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.760268  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.760277  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.760285  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.760293  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.760574  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"443","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0817 21:34:32.760901  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:32.760911  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.760918  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.760924  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.762895  223217 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:34:32.762908  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.762914  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.762919  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.762924  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.762929  223217 round_trippers.go:580]     Audit-Id: 2acc9357-cf9a-4a33-aa29-6fa2dea71ee2
	I0817 21:34:32.762934  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.762939  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.763175  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:34:32.763423  223217 pod_ready.go:92] pod "kube-controller-manager-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:32.763434  223217 pod_ready.go:81] duration metric: took 5.420317ms waiting for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.763442  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:32.931858  223217 request.go:628] Waited for 168.336634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:34:32.931946  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:34:32.931953  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:32.931965  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:32.931975  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:32.934860  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:32.934879  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:32.934888  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:32 GMT
	I0817 21:34:32.934893  223217 round_trippers.go:580]     Audit-Id: 46dd3ad2-9ace-458c-88b9-99e2b7f1e11f
	I0817 21:34:32.934899  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:32.934904  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:32.934909  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:32.934915  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:32.935098  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gdf7","generateName":"kube-proxy-","namespace":"kube-system","uid":"00e6f433-51d6-49bb-a927-780720361eb3","resourceVersion":"413","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0817 21:34:33.130911  223217 request.go:628] Waited for 195.282354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:33.130989  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:33.130994  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:33.131003  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:33.131009  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:33.133973  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:33.133992  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:33.134000  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:33 GMT
	I0817 21:34:33.134006  223217 round_trippers.go:580]     Audit-Id: 44f62c41-1f7b-494a-8cfd-812ff6c662c3
	I0817 21:34:33.134012  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:33.134017  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:33.134023  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:33.134028  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:33.134181  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:34:33.134589  223217 pod_ready.go:92] pod "kube-proxy-8gdf7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:33.134612  223217 pod_ready.go:81] duration metric: took 371.163905ms waiting for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:33.134626  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:33.330992  223217 request.go:628] Waited for 196.282573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:34:33.331073  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:34:33.331081  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:33.331089  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:33.331096  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:33.337353  223217 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0817 21:34:33.337379  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:33.337389  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:33.337397  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:33.337404  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:33 GMT
	I0817 21:34:33.337412  223217 round_trippers.go:580]     Audit-Id: 6abcb1dc-fd3f-499e-a2ba-4c5f22d97110
	I0817 21:34:33.337420  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:33.337428  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:33.337602  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmldj","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac59040d-df0c-416f-9660-4a41f7b75789","resourceVersion":"519","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0817 21:34:33.531422  223217 request.go:628] Waited for 193.286553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:33.531486  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:34:33.531493  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:33.531503  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:33.531512  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:33.534424  223217 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:34:33.534447  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:33.534454  223217 round_trippers.go:580]     Content-Length: 3606
	I0817 21:34:33.534460  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:33 GMT
	I0817 21:34:33.534469  223217 round_trippers.go:580]     Audit-Id: d93577c2-98af-46b7-862d-970f274a112c
	I0817 21:34:33.534479  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:33.534484  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:33.534489  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:33.534498  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:33.534721  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"531","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2582 chars]
	I0817 21:34:33.534958  223217 pod_ready.go:92] pod "kube-proxy-zmldj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:33.534971  223217 pod_ready.go:81] duration metric: took 400.33848ms waiting for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:33.534981  223217 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:33.731458  223217 request.go:628] Waited for 196.384094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:34:33.731546  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:34:33.731555  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:33.731565  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:33.731575  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:33.735034  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:33.735054  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:33.735064  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:33.735072  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:33.735079  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:33.735088  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:33 GMT
	I0817 21:34:33.735097  223217 round_trippers.go:580]     Audit-Id: a26e2a1e-94f6-4cf0-a58e-678fb866b65b
	I0817 21:34:33.735109  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:33.735220  223217 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-959371","namespace":"kube-system","uid":"a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2","resourceVersion":"349","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.mirror":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.seen":"2023-08-17T21:33:26.519087461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0817 21:34:33.930996  223217 request.go:628] Waited for 195.294589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:33.931064  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:34:33.931068  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:33.931076  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:33.931084  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:33.934104  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:33.934132  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:33.934145  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:33.934154  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:33 GMT
	I0817 21:34:33.934161  223217 round_trippers.go:580]     Audit-Id: 7e08690b-7dd6-484f-8d56-a44c193ade64
	I0817 21:34:33.934168  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:33.934176  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:33.934184  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:33.934329  223217 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0817 21:34:33.934739  223217 pod_ready.go:92] pod "kube-scheduler-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:34:33.934767  223217 pod_ready.go:81] duration metric: took 399.774343ms waiting for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:34:33.934782  223217 pod_ready.go:38] duration metric: took 1.200443529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:34:33.934799  223217 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:34:33.934855  223217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:34:33.950004  223217 system_svc.go:56] duration metric: took 15.189363ms WaitForService to wait for kubelet.
	I0817 21:34:33.950044  223217 kubeadm.go:581] duration metric: took 8.746656805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:34:33.950087  223217 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:34:34.131585  223217 request.go:628] Waited for 181.392267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I0817 21:34:34.131686  223217 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I0817 21:34:34.131693  223217 round_trippers.go:469] Request Headers:
	I0817 21:34:34.131705  223217 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:34:34.131716  223217 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:34:34.135110  223217 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:34:34.135143  223217 round_trippers.go:577] Response Headers:
	I0817 21:34:34.135156  223217 round_trippers.go:580]     Audit-Id: b100c720-e3a2-4d00-82a3-0b08f3a25670
	I0817 21:34:34.135164  223217 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:34:34.135171  223217 round_trippers.go:580]     Content-Type: application/json
	I0817 21:34:34.135179  223217 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:34:34.135187  223217 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:34:34.135195  223217 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:34:34 GMT
	I0817 21:34:34.135378  223217 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"425","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9526 chars]
	I0817 21:34:34.136006  223217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:34:34.136030  223217 node_conditions.go:123] node cpu capacity is 2
	I0817 21:34:34.136045  223217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:34:34.136051  223217 node_conditions.go:123] node cpu capacity is 2
	I0817 21:34:34.136058  223217 node_conditions.go:105] duration metric: took 185.964748ms to run NodePressure ...
	I0817 21:34:34.136077  223217 start.go:228] waiting for startup goroutines ...
	I0817 21:34:34.136114  223217 start.go:242] writing updated cluster config ...
	I0817 21:34:34.136510  223217 ssh_runner.go:195] Run: rm -f paused
	I0817 21:34:34.188551  223217 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 21:34:34.190955  223217 out.go:177] * Done! kubectl is now configured to use "multinode-959371" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 21:32:52 UTC, ends at Thu 2023-08-17 21:34:41 UTC. --
	Aug 17 21:34:40 multinode-959371 crio[715]: time="2023-08-17 21:34:40.959465597Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-9c77m,Uid:4b9baf46-70e7-4d95-b774-9c12c6970154,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308075386136780,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:34:35.048256569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-87rlb,Uid:52da85e0-72f0-4919-8615-d1cb46b65ca4,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1692308025793400336,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:33:44.859375135Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e8aa1192-3588-49da-be88-15a801d006fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308025207693675,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]strin
g{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T21:33:44.874576534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&PodSandboxMetadata{Name:kube-proxy-8gdf7,Uid:00e6f433-51d6-49bb-a927-780720361eb3,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1692308021108286017,Labels:map[string]string{controller-revision-hash: 86cc8bcbf7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361eb3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:33:39.277405538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&PodSandboxMetadata{Name:kindnet-s7l7j,Uid:6af177c8-cc30-4a86-98d8-443cef5036d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308020185598125,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af177c8-cc30-4a86-98d8-443cef5036d8,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:33:39.247550360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-959371,Uid:8e691503605658781b8470b3d4d7c7b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692307997639388210,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8e691503605658781b8470b3d4d7c7b0,kubernetes.io/config.seen: 2023-08-17T21:33:17.080877270Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&PodSandboxMetadata
{Name:kube-apiserver-multinode-959371,Uid:1844dfd193c27ced8aa4dba039096475,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692307997631103910,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.104:8443,kubernetes.io/config.hash: 1844dfd193c27ced8aa4dba039096475,kubernetes.io/config.seen: 2023-08-17T21:33:17.080876211Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-959371,Uid:010b5eeb8ae476ddfe7bf4d61569f753,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692307997617141610,Labels:map[string]string{component: kube-scheduler,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 010b5eeb8ae476ddfe7bf4d61569f753,kubernetes.io/config.seen: 2023-08-17T21:33:17.080878464Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&PodSandboxMetadata{Name:etcd-multinode-959371,Uid:524855ea42058e731111bcfa912d2dbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692307997563858568,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.104:2379,kubernet
es.io/config.hash: 524855ea42058e731111bcfa912d2dbe,kubernetes.io/config.seen: 2023-08-17T21:33:17.080872510Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=418ebbb6-b6e1-4a57-92c2-ff9e74f967c8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 21:34:40 multinode-959371 crio[715]: time="2023-08-17 21:34:40.960611823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f4599a4-ebe0-41c9-bbbd-463e15146ed3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:34:40 multinode-959371 crio[715]: time="2023-08-17 21:34:40.960752260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f4599a4-ebe0-41c9-bbbd-463e15146ed3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:34:40 multinode-959371 crio[715]: time="2023-08-17 21:34:40.960967002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f4599a4-ebe0-41c9-bbbd-463e15146ed3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.096966459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a38d959b-15fa-44c1-9923-5efbc330b930 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.097063840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a38d959b-15fa-44c1-9923-5efbc330b930 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.097304162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a38d959b-15fa-44c1-9923-5efbc330b930 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.132473822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5b7240de-6801-4d82-a506-8c1e97429546 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.132573575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5b7240de-6801-4d82-a506-8c1e97429546 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.132878776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5b7240de-6801-4d82-a506-8c1e97429546 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.168559467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=df84cf68-ef71-4966-9e4a-e5cad6d9fcb8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.168733912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=df84cf68-ef71-4966-9e4a-e5cad6d9fcb8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.169038714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=df84cf68-ef71-4966-9e4a-e5cad6d9fcb8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.203381777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=80ab56d3-46e0-4782-b71c-6c5002c7fb86 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.203453866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=80ab56d3-46e0-4782-b71c-6c5002c7fb86 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.203858934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=80ab56d3-46e0-4782-b71c-6c5002c7fb86 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.240903576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c7cea9be-4c47-407d-83a6-6a0a48082197 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.240971160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c7cea9be-4c47-407d-83a6-6a0a48082197 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.241194436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c7cea9be-4c47-407d-83a6-6a0a48082197 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.278801879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=afc90dd8-f04f-4fab-b3f4-d1ce39194f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.278866595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=afc90dd8-f04f-4fab-b3f4-d1ce39194f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.279083895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=afc90dd8-f04f-4fab-b3f4-d1ce39194f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.314367558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=86576977-e0b9-4b25-a5ed-824d0d1bdf8b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.314469279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=86576977-e0b9-4b25-a5ed-824d0d1bdf8b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:34:41 multinode-959371 crio[715]: time="2023-08-17 21:34:41.314747950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e37a2bae57665eb64f351fb60e6f57fe41bad90968c93cda22d0e10e76347ef,PodSandboxId:bd1415cc5cf4cdd4559f928eb7da97d3079f4dab62bfd60ed304d181c8d41f7c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308076907675924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359,PodSandboxId:6c57828b3cef151516ee5c2e0f437e515526462a863bdc6fd34c14e3f9d6e66a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308026366524822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa8781a32c0f75bc20936cd5a807ef33dd8b61cb68c4df453426b2eb994b6b,PodSandboxId:3aaec7514fd869524c35c86aba7faa267a5c383a97fd1d1ae6147096ba688b32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308025524750097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71,PodSandboxId:ab40b5f8683a7484190a69c3bd0fedcab99a8bb556becf6becae55257f7777ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308023147866075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b,PodSandboxId:b2942d4e60fbf716e7eca64d7080f14e01b6e234a8af212ab4a78a082d1a7f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308021480965428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396,PodSandboxId:d34044785cea7c3daae96bbef50f597731ff208d5e4761dcc66e41f1e5312d6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692307998757551058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.
container.hash: 69f53455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662,PodSandboxId:69c9720229964900e7861fa7ede36ad5cb4b0d9663c65754f0e600911b65541d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692307998705943702,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 373e41ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a,PodSandboxId:d0c0b2d5258e41d9c7f9b7d03ee4aae0a02829c45f4e227f411a0d96ce9e5d4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692307998263035832,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io
.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062,PodSandboxId:71579d7ad3e84d59c954415af323b5713241963f19cd5fc28274dd4f246a45ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692307998032936037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.
container.hash: 2ea5cdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=86576977-e0b9-4b25-a5ed-824d0d1bdf8b name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	0e37a2bae5766       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   bd1415cc5cf4c
	3e5f3de67e578       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   6c57828b3cef1
	19fa8781a32c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   3aaec7514fd86
	3da316e6e49da       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      58 seconds ago       Running             kindnet-cni               0                   ab40b5f8683a7
	35d4448d66bec       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                      59 seconds ago       Running             kube-proxy                0                   b2942d4e60fbf
	253ccdc607d0f       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   d34044785cea7
	03f71205c67a4       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                      About a minute ago   Running             kube-scheduler            0                   69c9720229964
	aac460f2e536d       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                      About a minute ago   Running             kube-controller-manager   0                   d0c0b2d5258e4
	541fc380a4b09       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                      About a minute ago   Running             kube-apiserver            0                   71579d7ad3e84
	
	* 
	* ==> coredns [3e5f3de67e57847d4f8376f1f88b2af25e4df9ebd3aa4f9d43e29d5ccb18e359] <==
	* [INFO] 10.244.1.2:56858 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000275402s
	[INFO] 10.244.0.3:45415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102736s
	[INFO] 10.244.0.3:43670 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002153679s
	[INFO] 10.244.0.3:50071 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012876s
	[INFO] 10.244.0.3:51590 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078301s
	[INFO] 10.244.0.3:35536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001606381s
	[INFO] 10.244.0.3:44383 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034277s
	[INFO] 10.244.0.3:47455 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032096s
	[INFO] 10.244.0.3:37348 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120622s
	[INFO] 10.244.1.2:54381 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139736s
	[INFO] 10.244.1.2:49815 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011148s
	[INFO] 10.244.1.2:45682 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157117s
	[INFO] 10.244.1.2:56267 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107501s
	[INFO] 10.244.0.3:60005 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112766s
	[INFO] 10.244.0.3:43201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094568s
	[INFO] 10.244.0.3:54204 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066779s
	[INFO] 10.244.0.3:52086 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079686s
	[INFO] 10.244.1.2:34388 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125985s
	[INFO] 10.244.1.2:50900 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257795s
	[INFO] 10.244.1.2:56864 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119885s
	[INFO] 10.244.1.2:46762 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128357s
	[INFO] 10.244.0.3:41440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134943s
	[INFO] 10.244.0.3:41341 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085386s
	[INFO] 10.244.0.3:40777 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000067059s
	[INFO] 10.244.0.3:42058 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092544s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-959371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=multinode-959371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_33_27_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959371
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:33:44 +0000   Thu, 17 Aug 2023 21:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:33:44 +0000   Thu, 17 Aug 2023 21:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:33:44 +0000   Thu, 17 Aug 2023 21:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:33:44 +0000   Thu, 17 Aug 2023 21:33:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    multinode-959371
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcfbba5eccdf437581b014b838d975be
	  System UUID:                dcfbba5e-ccdf-4375-81b0-14b838d975be
	  Boot ID:                    ecb21ee1-089b-48d8-b001-3ca2b436b75d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-9c77m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5d78c9869d-87rlb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-959371                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kindnet-s7l7j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-959371             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-multinode-959371    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-8gdf7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-959371             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node multinode-959371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node multinode-959371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node multinode-959371 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s                kubelet          Node multinode-959371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s                kubelet          Node multinode-959371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s                kubelet          Node multinode-959371 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                node-controller  Node multinode-959371 event: Registered Node multinode-959371 in Controller
	  Normal  NodeReady                57s                kubelet          Node multinode-959371 status is now: NodeReady
	
	
	Name:               multinode-959371-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959371-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:34:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959371-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:34:32 +0000   Thu, 17 Aug 2023 21:34:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:34:32 +0000   Thu, 17 Aug 2023 21:34:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:34:32 +0000   Thu, 17 Aug 2023 21:34:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:34:32 +0000   Thu, 17 Aug 2023 21:34:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    multinode-959371-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a378129c4d6a4a71bb2566f0c8e30009
	  System UUID:                a378129c-4d6a-4a71-bb25-66f0c8e30009
	  Boot ID:                    089a50e0-aeb8-4119-be98-e1239ba242e0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-65x2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-xjn26              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-zmldj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 19s)  kubelet          Node multinode-959371-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 19s)  kubelet          Node multinode-959371-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 19s)  kubelet          Node multinode-959371-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node multinode-959371-m02 event: Registered Node multinode-959371-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-959371-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug17 21:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077307] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.383993] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.448432] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.157201] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.989888] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug17 21:33] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.118508] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.143407] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.107188] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.211303] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.836273] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +9.798237] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [253ccdc607d0f73b4ba34e6c744d797263fd7eca8d2faf63a6a1dd12485a8396] <==
	* {"level":"info","ts":"2023-08-17T21:33:20.651Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","added-peer-id":"223628dc6b2f68bd","added-peer-peer-urls":["https://192.168.39.104:2380"]}
	{"level":"info","ts":"2023-08-17T21:33:20.665Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-17T21:33:20.666Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"223628dc6b2f68bd","initial-advertise-peer-urls":["https://192.168.39.104:2380"],"listen-peer-urls":["https://192.168.39.104:2380"],"advertise-client-urls":["https://192.168.39.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T21:33:20.666Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2023-08-17T21:33:20.666Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2023-08-17T21:33:20.666Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgPreVoteResp from 223628dc6b2f68bd at term 1"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgVoteResp from 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became leader at term 2"}
	{"level":"info","ts":"2023-08-17T21:33:21.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 223628dc6b2f68bd elected leader 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2023-08-17T21:33:21.340Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:33:21.341Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"223628dc6b2f68bd","local-member-attributes":"{Name:multinode-959371 ClientURLs:[https://192.168.39.104:2379]}","request-path":"/0/members/223628dc6b2f68bd/attributes","cluster-id":"bcba49d8b8764a98","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T21:33:21.341Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:33:21.342Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T21:33:21.342Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:33:21.343Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.104:2379"}
	{"level":"info","ts":"2023-08-17T21:33:21.344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T21:33:21.344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-17T21:33:21.342Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:33:21.346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:33:21.351Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:34:28.489Z","caller":"traceutil/trace.go:171","msg":"trace[453545621] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"116.860279ms","start":"2023-08-17T21:34:28.372Z","end":"2023-08-17T21:34:28.489Z","steps":["trace[453545621] 'process raft request'  (duration: 85.647258ms)","trace[453545621] 'compare'  (duration: 31.001015ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:34:41 up 1 min,  0 users,  load average: 1.15, 0.46, 0.17
	Linux multinode-959371 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [3da316e6e49da5c5d3fb1e58f3f3dfd93d8c33954750b30822efd9f7375deb71] <==
	* I0817 21:33:43.906753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0817 21:33:43.906850       1 main.go:107] hostIP = 192.168.39.104
	podIP = 192.168.39.104
	I0817 21:33:43.907126       1 main.go:116] setting mtu 1500 for CNI 
	I0817 21:33:43.907172       1 main.go:146] kindnetd IP family: "ipv4"
	I0817 21:33:43.907197       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0817 21:33:44.502875       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:33:44.502931       1 main.go:227] handling current node
	I0817 21:33:54.604884       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:33:54.604953       1 main.go:227] handling current node
	I0817 21:34:04.618903       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:34:04.618964       1 main.go:227] handling current node
	I0817 21:34:14.631752       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:34:14.632139       1 main.go:227] handling current node
	I0817 21:34:24.637135       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:34:24.637231       1 main.go:227] handling current node
	I0817 21:34:24.637261       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0817 21:34:24.637279       1 main.go:250] Node multinode-959371-m02 has CIDR [10.244.1.0/24] 
	I0817 21:34:24.637532       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.175 Flags: [] Table: 0} 
	I0817 21:34:34.651194       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:34:34.651258       1 main.go:227] handling current node
	I0817 21:34:34.651275       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0817 21:34:34.651286       1 main.go:250] Node multinode-959371-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062] <==
	* I0817 21:33:22.876610       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:33:22.898215       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0817 21:33:22.905552       1 shared_informer.go:318] Caches are synced for configmaps
	I0817 21:33:22.906196       1 aggregator.go:152] initial CRD sync complete...
	I0817 21:33:22.906249       1 autoregister_controller.go:141] Starting autoregister controller
	I0817 21:33:22.906273       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0817 21:33:22.906296       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:33:22.912206       1 controller.go:624] quota admission added evaluator for: namespaces
	I0817 21:33:22.953951       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 21:33:23.488718       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:33:23.780238       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0817 21:33:23.785399       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0817 21:33:23.785449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0817 21:33:24.618095       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 21:33:24.671888       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 21:33:24.808002       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0817 21:33:24.815285       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.104]
	I0817 21:33:24.816168       1 controller.go:624] quota admission added evaluator for: endpoints
	I0817 21:33:24.820867       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 21:33:24.885421       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0817 21:33:26.401742       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0817 21:33:26.436316       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0817 21:33:26.452611       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0817 21:33:38.283195       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0817 21:33:39.116363       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [aac460f2e536d2f68faa81716b2b7d18456f6f2ad149e680a11939702374547a] <==
	* I0817 21:33:38.325566       1 shared_informer.go:318] Caches are synced for resource quota
	I0817 21:33:38.332152       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-959371" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:33:38.334556       1 shared_informer.go:318] Caches are synced for resource quota
	I0817 21:33:38.341576       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-959371" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:33:38.342202       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-959371" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:33:38.691237       1 shared_informer.go:318] Caches are synced for garbage collector
	I0817 21:33:38.721792       1 shared_informer.go:318] Caches are synced for garbage collector
	I0817 21:33:38.721848       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0817 21:33:38.745962       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0817 21:33:39.216334       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-s7l7j"
	I0817 21:33:39.241500       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8gdf7"
	I0817 21:33:39.277435       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-rdkd4"
	I0817 21:33:39.368812       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-87rlb"
	I0817 21:33:39.606132       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-rdkd4"
	I0817 21:33:48.313560       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0817 21:34:24.168347       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959371-m02\" does not exist"
	I0817 21:34:24.197511       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xjn26"
	I0817 21:34:24.211361       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959371-m02" podCIDRs=[10.244.1.0/24]
	I0817 21:34:24.211534       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zmldj"
	I0817 21:34:28.321782       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-959371-m02"
	I0817 21:34:28.321980       1 event.go:307] "Event occurred" object="multinode-959371-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959371-m02 event: Registered Node multinode-959371-m02 in Controller"
	W0817 21:34:32.651030       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m02 node
	I0817 21:34:34.977785       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0817 21:34:34.996233       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-65x2b"
	I0817 21:34:35.029873       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-9c77m"
	
	* 
	* ==> kube-proxy [35d4448d66bec13edb19e2a345e7753b03d93b804125612c6b7c67b497fcb62b] <==
	* I0817 21:33:41.797294       1 node.go:141] Successfully retrieved node IP: 192.168.39.104
	I0817 21:33:41.797393       1 server_others.go:110] "Detected node IP" address="192.168.39.104"
	I0817 21:33:41.797417       1 server_others.go:554] "Using iptables proxy"
	I0817 21:33:41.846989       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 21:33:41.847053       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:33:41.847751       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:33:41.849168       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:33:41.849208       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:33:41.850752       1 config.go:188] "Starting service config controller"
	I0817 21:33:41.851925       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:33:41.852106       1 config.go:315] "Starting node config controller"
	I0817 21:33:41.852115       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:33:41.853355       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:33:41.853481       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:33:41.953220       1 shared_informer.go:318] Caches are synced for node config
	I0817 21:33:41.953262       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:33:41.956276       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [03f71205c67a4eef9cbea4c0745f9d527061c25423543c30c8955966ec901662] <==
	* W0817 21:33:23.876753       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:33:23.876859       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0817 21:33:23.909505       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 21:33:23.909576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0817 21:33:23.924174       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:33:23.924232       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 21:33:23.972825       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:33:23.972884       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 21:33:23.972944       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:33:23.972953       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 21:33:24.017953       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 21:33:24.018166       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 21:33:24.181973       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:33:24.182135       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 21:33:24.216863       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 21:33:24.216954       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 21:33:24.256239       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 21:33:24.256264       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0817 21:33:24.280719       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:33:24.280817       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 21:33:24.307612       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:33:24.307749       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 21:33:24.363798       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:33:24.363825       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0817 21:33:27.120673       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 21:32:52 UTC, ends at Thu 2023-08-17 21:34:41 UTC. --
	Aug 17 21:33:39 multinode-959371 kubelet[1265]: I0817 21:33:39.357069    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6af177c8-cc30-4a86-98d8-443cef5036d8-xtables-lock\") pod \"kindnet-s7l7j\" (UID: \"6af177c8-cc30-4a86-98d8-443cef5036d8\") " pod="kube-system/kindnet-s7l7j"
	Aug 17 21:33:39 multinode-959371 kubelet[1265]: I0817 21:33:39.357091    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00e6f433-51d6-49bb-a927-780720361eb3-kube-proxy\") pod \"kube-proxy-8gdf7\" (UID: \"00e6f433-51d6-49bb-a927-780720361eb3\") " pod="kube-system/kube-proxy-8gdf7"
	Aug 17 21:33:39 multinode-959371 kubelet[1265]: I0817 21:33:39.357120    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6af177c8-cc30-4a86-98d8-443cef5036d8-lib-modules\") pod \"kindnet-s7l7j\" (UID: \"6af177c8-cc30-4a86-98d8-443cef5036d8\") " pod="kube-system/kindnet-s7l7j"
	Aug 17 21:33:39 multinode-959371 kubelet[1265]: I0817 21:33:39.357139    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dm79\" (UniqueName: \"kubernetes.io/projected/6af177c8-cc30-4a86-98d8-443cef5036d8-kube-api-access-6dm79\") pod \"kindnet-s7l7j\" (UID: \"6af177c8-cc30-4a86-98d8-443cef5036d8\") " pod="kube-system/kindnet-s7l7j"
	Aug 17 21:33:40 multinode-959371 kubelet[1265]: E0817 21:33:40.459762    1265 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 17 21:33:40 multinode-959371 kubelet[1265]: E0817 21:33:40.459973    1265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/00e6f433-51d6-49bb-a927-780720361eb3-kube-proxy podName:00e6f433-51d6-49bb-a927-780720361eb3 nodeName:}" failed. No retries permitted until 2023-08-17 21:33:40.959946072 +0000 UTC m=+14.591690601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/00e6f433-51d6-49bb-a927-780720361eb3-kube-proxy") pod "kube-proxy-8gdf7" (UID: "00e6f433-51d6-49bb-a927-780720361eb3") : failed to sync configmap cache: timed out waiting for the condition
	Aug 17 21:33:43 multinode-959371 kubelet[1265]: I0817 21:33:43.734971    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8gdf7" podStartSLOduration=4.734864827 podCreationTimestamp="2023-08-17 21:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:33:41.731216833 +0000 UTC m=+15.362961377" watchObservedRunningTime="2023-08-17 21:33:43.734864827 +0000 UTC m=+17.366609363"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.824329    1265 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.859267    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-s7l7j" podStartSLOduration=5.859230892 podCreationTimestamp="2023-08-17 21:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:33:43.736000707 +0000 UTC m=+17.367745251" watchObservedRunningTime="2023-08-17 21:33:44.859230892 +0000 UTC m=+18.490975437"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.859538    1265 topology_manager.go:212] "Topology Admit Handler"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: W0817 21:33:44.864088    1265 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-959371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-959371' and this object
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: E0817 21:33:44.864192    1265 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-959371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-959371' and this object
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.874822    1265 topology_manager.go:212] "Topology Admit Handler"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.896491    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e8aa1192-3588-49da-be88-15a801d006fc-tmp\") pod \"storage-provisioner\" (UID: \"e8aa1192-3588-49da-be88-15a801d006fc\") " pod="kube-system/storage-provisioner"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.896557    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52da85e0-72f0-4919-8615-d1cb46b65ca4-config-volume\") pod \"coredns-5d78c9869d-87rlb\" (UID: \"52da85e0-72f0-4919-8615-d1cb46b65ca4\") " pod="kube-system/coredns-5d78c9869d-87rlb"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.896583    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b828\" (UniqueName: \"kubernetes.io/projected/52da85e0-72f0-4919-8615-d1cb46b65ca4-kube-api-access-2b828\") pod \"coredns-5d78c9869d-87rlb\" (UID: \"52da85e0-72f0-4919-8615-d1cb46b65ca4\") " pod="kube-system/coredns-5d78c9869d-87rlb"
	Aug 17 21:33:44 multinode-959371 kubelet[1265]: I0817 21:33:44.896613    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmdz\" (UniqueName: \"kubernetes.io/projected/e8aa1192-3588-49da-be88-15a801d006fc-kube-api-access-tjmdz\") pod \"storage-provisioner\" (UID: \"e8aa1192-3588-49da-be88-15a801d006fc\") " pod="kube-system/storage-provisioner"
	Aug 17 21:33:46 multinode-959371 kubelet[1265]: I0817 21:33:46.605269    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.605232054 podCreationTimestamp="2023-08-17 21:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:33:45.753459525 +0000 UTC m=+19.385204069" watchObservedRunningTime="2023-08-17 21:33:46.605232054 +0000 UTC m=+20.236976599"
	Aug 17 21:33:46 multinode-959371 kubelet[1265]: I0817 21:33:46.775575    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-87rlb" podStartSLOduration=7.775539572 podCreationTimestamp="2023-08-17 21:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:33:46.751044444 +0000 UTC m=+20.382788989" watchObservedRunningTime="2023-08-17 21:33:46.775539572 +0000 UTC m=+20.407284116"
	Aug 17 21:34:26 multinode-959371 kubelet[1265]: E0817 21:34:26.614309    1265 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 21:34:26 multinode-959371 kubelet[1265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 21:34:26 multinode-959371 kubelet[1265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 21:34:26 multinode-959371 kubelet[1265]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 21:34:35 multinode-959371 kubelet[1265]: I0817 21:34:35.048536    1265 topology_manager.go:212] "Topology Admit Handler"
	Aug 17 21:34:35 multinode-959371 kubelet[1265]: I0817 21:34:35.098440    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66m7\" (UniqueName: \"kubernetes.io/projected/4b9baf46-70e7-4d95-b774-9c12c6970154-kube-api-access-r66m7\") pod \"busybox-67b7f59bb-9c77m\" (UID: \"4b9baf46-70e7-4d95-b774-9c12c6970154\") " pod="default/busybox-67b7f59bb-9c77m"
	

                                                
                                                
-- /stdout --
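Note on the kubelet entries at the end of the dump above: the "Could not set up iptables canary" error (ip6tables cannot initialize the `nat' table, "do you need to insmod?") suggests the guest kernel simply does not have the ip6table_nat module loaded, and it is likely unrelated to the ping failure under test. A minimal check one could run by hand against this profile (not part of the test run; the profile name multinode-959371 is taken from the logs above):

	out/minikube-linux-amd64 -p multinode-959371 ssh   # open a shell inside the node VM
	lsmod | grep ip6table_nat                          # empty output => module not loaded
	sudo modprobe ip6table_nat                         # try to load it (may fail if the ISO kernel lacks the module)
	sudo ip6tables -t nat -L -n                        # lists the nat table chains once the module is present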
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-959371 -n multinode-959371
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-959371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.23s)
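For context on the kube-scheduler section of the dump above: the repeated `forbidden: User "system:kube-scheduler" cannot list resource ...` warnings are typically emitted while the API server is still coming up and has not yet served its bootstrap RBAC policy; in this log they stop before the "Caches are synced" entry at 21:33:27, so they do not look like the cause of the failure. To double-check the scheduler's permissions after startup, a quick out-of-band check could use kubectl impersonation against this profile's context (context name taken from the report; not something the test itself runs):

	# each of these should print "yes" once the default system:kube-scheduler RBAC bindings exist
	kubectl --context multinode-959371 auth can-i list pods --as=system:kube-scheduler
	kubectl --context multinode-959371 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler
	kubectl --context multinode-959371 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler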

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (684.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-959371
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-959371
E0817 21:37:07.553392  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-959371: exit status 82 (2m0.858583377s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-959371"  ...
	* Stopping node "multinode-959371"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-959371" : exit status 82
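Because this run uses the kvm2 driver, the profile's VM is a libvirt domain named after the profile (the start log below refers to "domain multinode-959371", and the driver URI is qemu:///system). When `minikube stop` times out with GUEST_STOP_TIMEOUT like this, one manual recovery sketch, outside of what the test does, is to inspect and if necessary force off the domain directly with virsh:

	virsh -c qemu:///system list --all                 # is multinode-959371 still listed as "running"?
	virsh -c qemu:///system dominfo multinode-959371   # state, CPUs, memory of the domain
	virsh -c qemu:///system shutdown multinode-959371  # ask the guest to shut down (ACPI)
	virsh -c qemu:///system destroy multinode-959371   # last resort: hard power-off of the VM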
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959371 --wait=true -v=8 --alsologtostderr
E0817 21:38:09.344937  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:39:32.391065  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:40:31.665171  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:42:07.553662  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:43:09.344791  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:43:30.599686  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:45:31.666345  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:46:54.712496  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:47:07.553832  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959371 --wait=true -v=8 --alsologtostderr: (9m20.845407235s)
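The cert_rotation errors interleaved above (client.crt "no such file or directory" for the functional-540012, addons-696435 and ingress-addon-legacy-449686 profiles) appear to come from client-go's certificate watcher still tracking kubeconfig entries for profiles that earlier tests in this job created and then deleted; they are noise as far as this test is concerned. A quick way to confirm those profiles are gone (a manual check, using the .minikube path from the log):

	ls /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/
	# only the profiles that still exist should be listed, e.g. multinode-959371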
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-959371
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-959371 -n multinode-959371
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-959371 logs -n 25: (1.694573225s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m02:/home/docker/cp-test.txt                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1606715842/001/cp-test_multinode-959371-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m02:/home/docker/cp-test.txt                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371:/home/docker/cp-test_multinode-959371-m02_multinode-959371.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n multinode-959371 sudo cat                                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /home/docker/cp-test_multinode-959371-m02_multinode-959371.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m02:/home/docker/cp-test.txt                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03:/home/docker/cp-test_multinode-959371-m02_multinode-959371-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n multinode-959371-m03 sudo cat                                   | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /home/docker/cp-test_multinode-959371-m02_multinode-959371-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp testdata/cp-test.txt                                                | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1606715842/001/cp-test_multinode-959371-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371:/home/docker/cp-test_multinode-959371-m03_multinode-959371.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n multinode-959371 sudo cat                                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /home/docker/cp-test_multinode-959371-m03_multinode-959371.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt                       | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m02:/home/docker/cp-test_multinode-959371-m03_multinode-959371-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n multinode-959371-m02 sudo cat                                   | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /home/docker/cp-test_multinode-959371-m03_multinode-959371-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-959371 node stop m03                                                          | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	| node    | multinode-959371 node start                                                             | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:36 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-959371                                                                | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:36 UTC |                     |
	| stop    | -p multinode-959371                                                                     | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:36 UTC |                     |
	| start   | -p multinode-959371                                                                     | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:38 UTC | 17 Aug 23 21:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-959371                                                                | multinode-959371 | jenkins | v1.31.2 | 17 Aug 23 21:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:38:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:38:07.592499  226555 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:38:07.592618  226555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:38:07.592647  226555 out.go:309] Setting ErrFile to fd 2...
	I0817 21:38:07.592652  226555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:38:07.592852  226555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:38:07.593467  226555 out.go:303] Setting JSON to false
	I0817 21:38:07.594390  226555 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":22813,"bootTime":1692285475,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:38:07.594484  226555 start.go:138] virtualization: kvm guest
	I0817 21:38:07.597109  226555 out.go:177] * [multinode-959371] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:38:07.599139  226555 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:38:07.600490  226555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:38:07.599228  226555 notify.go:220] Checking for updates...
	I0817 21:38:07.602174  226555 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:38:07.603660  226555 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:38:07.605027  226555 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:38:07.606720  226555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:38:07.608803  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:38:07.608907  226555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:38:07.609284  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:38:07.609342  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:38:07.624603  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0817 21:38:07.625017  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:38:07.625618  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:38:07.625639  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:38:07.626031  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:38:07.626250  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:38:07.663168  226555 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 21:38:07.665237  226555 start.go:298] selected driver: kvm2
	I0817 21:38:07.665267  226555 start.go:902] validating driver "kvm2" against &{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:38:07.665527  226555 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:38:07.665893  226555 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:38:07.665987  226555 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 21:38:07.681885  226555 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 21:38:07.682610  226555 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:38:07.682701  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:38:07.682713  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:38:07.682723  226555 start_flags.go:319] config:
	{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:38:07.683006  226555 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:38:07.685315  226555 out.go:177] * Starting control plane node multinode-959371 in cluster multinode-959371
	I0817 21:38:07.686876  226555 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:38:07.686928  226555 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:38:07.686941  226555 cache.go:57] Caching tarball of preloaded images
	I0817 21:38:07.687036  226555 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:38:07.687048  226555 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:38:07.687209  226555 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:38:07.687431  226555 start.go:365] acquiring machines lock for multinode-959371: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:38:07.687488  226555 start.go:369] acquired machines lock for "multinode-959371" in 33.679µs
	I0817 21:38:07.687509  226555 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:38:07.687521  226555 fix.go:54] fixHost starting: 
	I0817 21:38:07.687839  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:38:07.687884  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:38:07.702422  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I0817 21:38:07.702876  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:38:07.703397  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:38:07.703417  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:38:07.703799  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:38:07.704006  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:38:07.704183  226555 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:38:07.705760  226555 fix.go:102] recreateIfNeeded on multinode-959371: state=Running err=<nil>
	W0817 21:38:07.705817  226555 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:38:07.709130  226555 out.go:177] * Updating the running kvm2 "multinode-959371" VM ...
	I0817 21:38:07.710912  226555 machine.go:88] provisioning docker machine ...
	I0817 21:38:07.710943  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:38:07.711214  226555 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:38:07.711419  226555 buildroot.go:166] provisioning hostname "multinode-959371"
	I0817 21:38:07.711441  226555 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:38:07.711570  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:38:07.714216  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:38:07.714732  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:38:07.714782  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:38:07.714882  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:38:07.715125  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:38:07.715320  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:38:07.715516  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:38:07.715697  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:38:07.716107  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:38:07.716126  226555 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959371 && echo "multinode-959371" | sudo tee /etc/hostname
	I0817 21:38:26.238356  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:32.318436  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:35.390400  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:41.470546  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:44.542314  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:50.622385  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:53.694391  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:38:59.774402  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:02.846381  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:08.926358  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:11.998375  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:18.078451  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:21.150447  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:27.230375  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:30.302458  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:36.382408  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:39.454355  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:45.534401  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:48.606333  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:54.686386  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:39:57.758444  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:03.838340  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:06.910301  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:12.990440  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:16.062356  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:22.142390  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:25.214427  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:31.294384  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:34.366405  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:40.446416  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:43.518387  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:49.598398  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:52.670483  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:40:58.750412  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:01.822427  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:07.902446  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:10.974359  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:17.054388  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:20.126396  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:26.206463  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:29.278387  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:35.358384  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:38.430512  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:44.510395  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:47.582326  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:53.662426  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:41:56.734323  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:02.814394  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:05.886391  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:11.966438  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:15.038418  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:21.118454  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:24.190497  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:30.270381  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:33.342419  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:39.422404  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:42.494355  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:48.574360  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:51.646362  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:42:57.726359  226555 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.104:22: connect: no route to host
	I0817 21:43:00.729219  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:43:00.729262  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:00.731608  226555 machine.go:91] provisioned docker machine in 4m53.020669311s
	I0817 21:43:00.731659  226555 fix.go:56] fixHost completed within 4m53.044139365s
	I0817 21:43:00.731669  226555 start.go:83] releasing machines lock for "multinode-959371", held for 4m53.04416861s
	W0817 21:43:00.731699  226555 start.go:672] error starting host: provision: host is not running
	W0817 21:43:00.731879  226555 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0817 21:43:00.731889  226555 start.go:687] Will try again in 5 seconds ...
	I0817 21:43:05.735051  226555 start.go:365] acquiring machines lock for multinode-959371: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:43:05.735189  226555 start.go:369] acquired machines lock for "multinode-959371" in 87.36µs
	I0817 21:43:05.735211  226555 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:43:05.735220  226555 fix.go:54] fixHost starting: 
	I0817 21:43:05.735572  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:43:05.735604  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:43:05.751653  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I0817 21:43:05.752218  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:43:05.752849  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:43:05.752874  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:43:05.753253  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:43:05.753461  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:05.753618  226555 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:43:05.755401  226555 fix.go:102] recreateIfNeeded on multinode-959371: state=Stopped err=<nil>
	I0817 21:43:05.755430  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	W0817 21:43:05.755590  226555 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:43:05.759044  226555 out.go:177] * Restarting existing kvm2 VM for "multinode-959371" ...
	I0817 21:43:05.760815  226555 main.go:141] libmachine: (multinode-959371) Calling .Start
	I0817 21:43:05.761050  226555 main.go:141] libmachine: (multinode-959371) Ensuring networks are active...
	I0817 21:43:05.761932  226555 main.go:141] libmachine: (multinode-959371) Ensuring network default is active
	I0817 21:43:05.762280  226555 main.go:141] libmachine: (multinode-959371) Ensuring network mk-multinode-959371 is active
	I0817 21:43:05.762605  226555 main.go:141] libmachine: (multinode-959371) Getting domain xml...
	I0817 21:43:05.763318  226555 main.go:141] libmachine: (multinode-959371) Creating domain...
	I0817 21:43:07.019448  226555 main.go:141] libmachine: (multinode-959371) Waiting to get IP...
	I0817 21:43:07.020424  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:07.020913  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:07.021027  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:07.020888  227390 retry.go:31] will retry after 275.815424ms: waiting for machine to come up
	I0817 21:43:07.298522  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:07.299060  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:07.299082  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:07.299012  227390 retry.go:31] will retry after 382.227297ms: waiting for machine to come up
	I0817 21:43:07.682728  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:07.683224  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:07.683270  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:07.683199  227390 retry.go:31] will retry after 348.277277ms: waiting for machine to come up
	I0817 21:43:08.032832  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:08.033332  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:08.033355  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:08.033264  227390 retry.go:31] will retry after 491.010612ms: waiting for machine to come up
	I0817 21:43:08.526034  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:08.526603  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:08.526631  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:08.526562  227390 retry.go:31] will retry after 713.655038ms: waiting for machine to come up
	I0817 21:43:09.241701  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:09.242259  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:09.242291  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:09.242201  227390 retry.go:31] will retry after 632.331666ms: waiting for machine to come up
	I0817 21:43:09.876299  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:09.876798  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:09.876830  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:09.876755  227390 retry.go:31] will retry after 926.191855ms: waiting for machine to come up
	I0817 21:43:10.804524  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:10.804986  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:10.805010  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:10.804953  227390 retry.go:31] will retry after 1.193844139s: waiting for machine to come up
	I0817 21:43:12.000448  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:12.000928  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:12.000957  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:12.000879  227390 retry.go:31] will retry after 1.705543186s: waiting for machine to come up
	I0817 21:43:13.707673  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:13.708235  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:13.708270  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:13.708182  227390 retry.go:31] will retry after 1.821279408s: waiting for machine to come up
	I0817 21:43:15.532450  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:15.532894  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:15.532919  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:15.532844  227390 retry.go:31] will retry after 2.720347034s: waiting for machine to come up
	I0817 21:43:18.255113  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:18.255624  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:18.255652  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:18.255551  227390 retry.go:31] will retry after 2.439049663s: waiting for machine to come up
	I0817 21:43:20.698276  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:20.698647  226555 main.go:141] libmachine: (multinode-959371) DBG | unable to find current IP address of domain multinode-959371 in network mk-multinode-959371
	I0817 21:43:20.698678  226555 main.go:141] libmachine: (multinode-959371) DBG | I0817 21:43:20.698605  227390 retry.go:31] will retry after 3.246660757s: waiting for machine to come up
	I0817 21:43:23.949470  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:23.949994  226555 main.go:141] libmachine: (multinode-959371) Found IP for machine: 192.168.39.104
	I0817 21:43:23.950024  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has current primary IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:23.950035  226555 main.go:141] libmachine: (multinode-959371) Reserving static IP address...
	I0817 21:43:23.950494  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "multinode-959371", mac: "52:54:00:b5:61:ee", ip: "192.168.39.104"} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:23.950540  226555 main.go:141] libmachine: (multinode-959371) DBG | skip adding static IP to network mk-multinode-959371 - found existing host DHCP lease matching {name: "multinode-959371", mac: "52:54:00:b5:61:ee", ip: "192.168.39.104"}
	I0817 21:43:23.950552  226555 main.go:141] libmachine: (multinode-959371) Reserved static IP address: 192.168.39.104
	I0817 21:43:23.950572  226555 main.go:141] libmachine: (multinode-959371) Waiting for SSH to be available...
	I0817 21:43:23.950582  226555 main.go:141] libmachine: (multinode-959371) DBG | Getting to WaitForSSH function...
	I0817 21:43:23.952671  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:23.953045  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:23.953078  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:23.953271  226555 main.go:141] libmachine: (multinode-959371) DBG | Using SSH client type: external
	I0817 21:43:23.953313  226555 main.go:141] libmachine: (multinode-959371) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa (-rw-------)
	I0817 21:43:23.953364  226555 main.go:141] libmachine: (multinode-959371) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 21:43:23.953389  226555 main.go:141] libmachine: (multinode-959371) DBG | About to run SSH command:
	I0817 21:43:23.953398  226555 main.go:141] libmachine: (multinode-959371) DBG | exit 0
	I0817 21:43:24.041870  226555 main.go:141] libmachine: (multinode-959371) DBG | SSH cmd err, output: <nil>: 
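The "will retry after ..." lines above come from libmachine polling the KVM network for the domain's DHCP lease with a growing, jittered delay until the guest reports an address and SSH answers. Purely as an illustration, here is a minimal Go sketch of that retry-until-IP loop; the helper names, durations, and jitter factor are assumptions for this sketch, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for asking the hypervisor about the
// domain's current DHCP lease; it fails until a lease exists.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring the
// "will retry after ..." lines in the log above. Durations are illustrative.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2)) // up to +50% jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 3*time.Second {
			delay += delay / 2 // back off toward a ~3s cap
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", domain)
}

func main() {
	if _, err := waitForIP("multinode-959371", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}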
	I0817 21:43:24.042277  226555 main.go:141] libmachine: (multinode-959371) Calling .GetConfigRaw
	I0817 21:43:24.043023  226555 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:43:24.045548  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.045952  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.045986  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.046389  226555 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:43:24.046605  226555 machine.go:88] provisioning docker machine ...
	I0817 21:43:24.046630  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:24.046877  226555 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:43:24.047051  226555 buildroot.go:166] provisioning hostname "multinode-959371"
	I0817 21:43:24.047074  226555 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:43:24.047223  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:24.049454  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.049785  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.049816  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.049921  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:24.050145  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.050324  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.050462  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:24.050599  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:43:24.051083  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:43:24.051099  226555 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959371 && echo "multinode-959371" | sudo tee /etc/hostname
	I0817 21:43:24.183686  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959371
	
	I0817 21:43:24.183722  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:24.186320  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.186741  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.186775  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.186919  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:24.187136  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.187325  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.187447  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:24.187584  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:43:24.188126  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:43:24.188159  226555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959371/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:43:24.311664  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:43:24.311697  226555 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:43:24.311739  226555 buildroot.go:174] setting up certificates
	I0817 21:43:24.311749  226555 provision.go:83] configureAuth start
	I0817 21:43:24.311760  226555 main.go:141] libmachine: (multinode-959371) Calling .GetMachineName
	I0817 21:43:24.312139  226555 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:43:24.314975  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.315333  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.315358  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.315508  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:24.317541  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.317871  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.317915  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.318068  226555 provision.go:138] copyHostCerts
	I0817 21:43:24.318115  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:43:24.318159  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 21:43:24.318198  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:43:24.318281  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:43:24.318361  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:43:24.318381  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 21:43:24.318388  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:43:24.318411  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:43:24.318459  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:43:24.318474  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 21:43:24.318480  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:43:24.318501  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:43:24.318546  226555 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.multinode-959371 san=[192.168.39.104 192.168.39.104 localhost 127.0.0.1 minikube multinode-959371]
	I0817 21:43:24.641226  226555 provision.go:172] copyRemoteCerts
	I0817 21:43:24.641292  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:43:24.641319  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:24.643808  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.644228  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.644266  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.644407  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:24.644605  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.644823  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:24.644968  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:43:24.731337  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:43:24.731429  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:43:24.755508  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:43:24.755580  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0817 21:43:24.779635  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:43:24.779715  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:43:24.804019  226555 provision.go:86] duration metric: configureAuth took 492.256549ms
	I0817 21:43:24.804059  226555 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:43:24.804418  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:43:24.804520  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:24.807441  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.807819  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:24.807847  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:24.808074  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:24.808334  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.808510  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:24.808632  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:24.808798  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:43:24.809400  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:43:24.809425  226555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:43:25.109026  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:43:25.109058  226555 machine.go:91] provisioned docker machine in 1.06243813s
	I0817 21:43:25.109071  226555 start.go:300] post-start starting for "multinode-959371" (driver="kvm2")
	I0817 21:43:25.109085  226555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:43:25.109105  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:25.109445  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:43:25.109477  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:25.112430  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.112850  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:25.112922  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.113103  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:25.113347  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:25.113536  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:25.113735  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:43:25.200513  226555 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:43:25.205006  226555 command_runner.go:130] > NAME=Buildroot
	I0817 21:43:25.205040  226555 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0817 21:43:25.205045  226555 command_runner.go:130] > ID=buildroot
	I0817 21:43:25.205051  226555 command_runner.go:130] > VERSION_ID=2021.02.12
	I0817 21:43:25.205056  226555 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0817 21:43:25.205186  226555 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:43:25.205209  226555 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:43:25.205291  226555 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:43:25.205387  226555 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 21:43:25.205404  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /etc/ssl/certs/2106702.pem
	I0817 21:43:25.205513  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:43:25.214484  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:43:25.240029  226555 start.go:303] post-start completed in 130.921985ms
	I0817 21:43:25.240058  226555 fix.go:56] fixHost completed within 19.504838547s
	I0817 21:43:25.240118  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:25.243246  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.243640  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:25.243704  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.243928  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:25.244164  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:25.244374  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:25.244541  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:25.244721  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:43:25.245133  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0817 21:43:25.245147  226555 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 21:43:25.359191  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692308605.306105413
	
	I0817 21:43:25.359220  226555 fix.go:206] guest clock: 1692308605.306105413
	I0817 21:43:25.359231  226555 fix.go:219] Guest: 2023-08-17 21:43:25.306105413 +0000 UTC Remote: 2023-08-17 21:43:25.240062521 +0000 UTC m=+317.685819022 (delta=66.042892ms)
	I0817 21:43:25.359289  226555 fix.go:190] guest clock delta is within tolerance: 66.042892ms
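The clock check above runs date +%s.%N in the guest (it appears as %!s(MISSING).%!N(MISSING) because the % verbs reach a printf-style logger without arguments), parses the seconds.nanoseconds value, and accepts the machine when the drift from the host clock stays within a tolerance. A rough sketch of that comparison follows; the one-second tolerance and function name are assumptions for the sketch, not minikube's constants.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
// It assumes %N prints all nine nanosecond digits, as in the log above.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1692308605.306105413") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}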
	I0817 21:43:25.359294  226555 start.go:83] releasing machines lock for "multinode-959371", held for 19.624096435s
	I0817 21:43:25.359330  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:25.359614  226555 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:43:25.362479  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.362893  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:25.362931  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.363070  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:25.363625  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:25.363826  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:43:25.363906  226555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:43:25.363950  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:25.364027  226555 ssh_runner.go:195] Run: cat /version.json
	I0817 21:43:25.364054  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:43:25.366557  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.366823  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.366955  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:25.366985  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.367100  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:25.367305  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:25.367311  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:25.367343  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:25.367541  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:25.367570  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:43:25.367725  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:43:25.367719  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:43:25.367861  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:43:25.367991  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:43:25.482741  226555 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:43:25.482799  226555 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "be0194f682c2c37366eacb8c13503cb6c7a41cf8"}
	I0817 21:43:25.483084  226555 ssh_runner.go:195] Run: systemctl --version
	I0817 21:43:25.488859  226555 command_runner.go:130] > systemd 247 (247)
	I0817 21:43:25.488895  226555 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0817 21:43:25.488970  226555 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:43:25.639797  226555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:43:25.646765  226555 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0817 21:43:25.646994  226555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:43:25.647088  226555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:43:25.661931  226555 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0817 21:43:25.662288  226555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 21:43:25.662312  226555 start.go:466] detecting cgroup driver to use...
	I0817 21:43:25.662420  226555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:43:25.679457  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:43:25.692319  226555 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:43:25.692396  226555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:43:25.705249  226555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:43:25.718719  226555 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:43:25.733069  226555 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0817 21:43:25.821232  226555 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:43:25.835662  226555 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0817 21:43:25.936812  226555 docker.go:212] disabling docker service ...
	I0817 21:43:25.936906  226555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:43:25.951296  226555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:43:25.963051  226555 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0817 21:43:25.963630  226555 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:43:25.978119  226555 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0817 21:43:26.073915  226555 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:43:26.086922  226555 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0817 21:43:26.087191  226555 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0817 21:43:26.182895  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:43:26.195545  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:43:26.213341  226555 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0817 21:43:26.213409  226555 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:43:26.213488  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:43:26.222594  226555 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:43:26.222679  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:43:26.231638  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:43:26.241443  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:43:26.250909  226555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:43:26.260811  226555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:43:26.268980  226555 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:43:26.269034  226555 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 21:43:26.269083  226555 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 21:43:26.281251  226555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
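The three runs above show the usual bridge-netfilter fallback: probe the sysctl, load br_netfilter when the key is missing, then enable IPv4 forwarding. A compact sketch of that same sequence with os/exec is below; the command strings are taken from the log, while the helper and its error handling are illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output plus any error.
func run(name string, args ...string) ([]byte, error) {
	return exec.Command(name, args...).CombinedOutput()
}

// ensureNetfilter mirrors the fallback in the log: if the bridge-nf sysctl
// cannot be read, load the br_netfilter module, then turn on ip_forward.
func ensureNetfilter() error {
	if out, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Printf("sysctl probe failed (%v): %s -- loading br_netfilter\n", err, out)
		if out, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	if out, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		return fmt.Errorf("enable ip_forward: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println(err)
	}
}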
	I0817 21:43:26.291267  226555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:43:26.392188  226555 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:43:26.572784  226555 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:43:26.572873  226555 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:43:26.578148  226555 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:43:26.578199  226555 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:43:26.578206  226555 command_runner.go:130] > Device: 16h/22d	Inode: 724         Links: 1
	I0817 21:43:26.578213  226555 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:43:26.578218  226555 command_runner.go:130] > Access: 2023-08-17 21:43:26.503579600 +0000
	I0817 21:43:26.578224  226555 command_runner.go:130] > Modify: 2023-08-17 21:43:26.503579600 +0000
	I0817 21:43:26.578229  226555 command_runner.go:130] > Change: 2023-08-17 21:43:26.503579600 +0000
	I0817 21:43:26.578232  226555 command_runner.go:130] >  Birth: -
	I0817 21:43:26.578253  226555 start.go:534] Will wait 60s for crictl version
	I0817 21:43:26.578320  226555 ssh_runner.go:195] Run: which crictl
	I0817 21:43:26.581925  226555 command_runner.go:130] > /usr/bin/crictl
	I0817 21:43:26.582081  226555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:43:26.616883  226555 command_runner.go:130] > Version:  0.1.0
	I0817 21:43:26.616905  226555 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:43:26.616910  226555 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0817 21:43:26.616915  226555 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0817 21:43:26.616934  226555 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:43:26.616998  226555 ssh_runner.go:195] Run: crio --version
	I0817 21:43:26.662373  226555 command_runner.go:130] > crio version 1.24.1
	I0817 21:43:26.662397  226555 command_runner.go:130] > Version:          1.24.1
	I0817 21:43:26.662404  226555 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:43:26.662409  226555 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:43:26.662414  226555 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:43:26.662419  226555 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:43:26.662423  226555 command_runner.go:130] > Compiler:         gc
	I0817 21:43:26.662428  226555 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:43:26.662441  226555 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:43:26.662448  226555 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:43:26.662453  226555 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:43:26.662457  226555 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:43:26.663903  226555 ssh_runner.go:195] Run: crio --version
	I0817 21:43:26.708443  226555 command_runner.go:130] > crio version 1.24.1
	I0817 21:43:26.708490  226555 command_runner.go:130] > Version:          1.24.1
	I0817 21:43:26.708500  226555 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:43:26.708507  226555 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:43:26.708518  226555 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:43:26.708526  226555 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:43:26.708537  226555 command_runner.go:130] > Compiler:         gc
	I0817 21:43:26.708547  226555 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:43:26.708559  226555 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:43:26.708572  226555 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:43:26.708580  226555 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:43:26.708584  226555 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:43:26.713648  226555 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 21:43:26.715833  226555 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:43:26.718595  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:26.719055  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:43:26.719216  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:43:26.719311  226555 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:43:26.723822  226555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:43:26.736249  226555 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:43:26.736341  226555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:43:26.767881  226555 command_runner.go:130] > {
	I0817 21:43:26.767910  226555 command_runner.go:130] >   "images": [
	I0817 21:43:26.767916  226555 command_runner.go:130] >     {
	I0817 21:43:26.767927  226555 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0817 21:43:26.767933  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:26.767942  226555 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0817 21:43:26.767948  226555 command_runner.go:130] >       ],
	I0817 21:43:26.767955  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:26.767989  226555 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0817 21:43:26.768009  226555 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0817 21:43:26.768013  226555 command_runner.go:130] >       ],
	I0817 21:43:26.768017  226555 command_runner.go:130] >       "size": "750414",
	I0817 21:43:26.768022  226555 command_runner.go:130] >       "uid": {
	I0817 21:43:26.768026  226555 command_runner.go:130] >         "value": "65535"
	I0817 21:43:26.768034  226555 command_runner.go:130] >       },
	I0817 21:43:26.768043  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:26.768060  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:26.768069  226555 command_runner.go:130] >     }
	I0817 21:43:26.768075  226555 command_runner.go:130] >   ]
	I0817 21:43:26.768081  226555 command_runner.go:130] > }
	I0817 21:43:26.769321  226555 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
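The decision at crio.go:492 follows from the JSON dumped above: crictl lists only the pause image, so the expected kube-apiserver:v1.27.4 tag is absent and the preload tarball has to be restored. A small sketch of that check is below; the struct and function names are made up for illustration, and only the repoTags field of the crictl output is modeled.

package main

import (
	"encoding/json"
	"fmt"
)

// image models the part of `crictl images --output json` used here.
type image struct {
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Trimmed-down sample shaped like the crictl output in the log.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.27.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok) // false -> fall back to the preload tarball
}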
	I0817 21:43:26.769400  226555 ssh_runner.go:195] Run: which lz4
	I0817 21:43:26.773668  226555 command_runner.go:130] > /usr/bin/lz4
	I0817 21:43:26.773700  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0817 21:43:26.773804  226555 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 21:43:26.778199  226555 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:43:26.778242  226555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:43:26.778269  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 21:43:28.704157  226555 crio.go:444] Took 1.930389 seconds to copy over tarball
	I0817 21:43:28.704270  226555 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:43:31.580788  226555 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.876481234s)
	I0817 21:43:31.580821  226555 crio.go:451] Took 2.876633 seconds to extract the tarball
	I0817 21:43:31.580836  226555 ssh_runner.go:146] rm: /preloaded.tar.lz4
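The copy-and-extract step above moves the 437 MB preload tarball into the guest and unpacks it with tar's -I lz4 filter, logging the elapsed time for each stage. A minimal, assumption-laden sketch of the extract-and-time portion follows; it presumes tar, lz4, and sudo are available and that the tarball is already at /preloaded.tar.lz4, and it is an illustration rather than minikube's crio.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed image tarball into dir and
// reports how long the extraction took, echoing the timing lines above.
func extractPreload(tarball, dir string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return 0, fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	elapsed, err := extractPreload("/preloaded.tar.lz4", "/var")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("Took %.6f seconds to extract the tarball\n", elapsed.Seconds())
}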
	I0817 21:43:31.621620  226555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:43:31.667029  226555 command_runner.go:130] > {
	I0817 21:43:31.667054  226555 command_runner.go:130] >   "images": [
	I0817 21:43:31.667059  226555 command_runner.go:130] >     {
	I0817 21:43:31.667067  226555 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0817 21:43:31.667071  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667077  226555 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0817 21:43:31.667081  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667084  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667093  226555 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0817 21:43:31.667100  226555 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0817 21:43:31.667103  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667108  226555 command_runner.go:130] >       "size": "65249302",
	I0817 21:43:31.667118  226555 command_runner.go:130] >       "uid": null,
	I0817 21:43:31.667125  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667130  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667133  226555 command_runner.go:130] >     },
	I0817 21:43:31.667137  226555 command_runner.go:130] >     {
	I0817 21:43:31.667143  226555 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0817 21:43:31.667148  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667158  226555 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0817 21:43:31.667163  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667167  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667174  226555 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0817 21:43:31.667184  226555 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0817 21:43:31.667187  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667191  226555 command_runner.go:130] >       "size": "31470524",
	I0817 21:43:31.667195  226555 command_runner.go:130] >       "uid": null,
	I0817 21:43:31.667211  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667215  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667218  226555 command_runner.go:130] >     },
	I0817 21:43:31.667222  226555 command_runner.go:130] >     {
	I0817 21:43:31.667228  226555 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0817 21:43:31.667231  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667236  226555 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0817 21:43:31.667240  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667244  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667251  226555 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0817 21:43:31.667272  226555 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0817 21:43:31.667277  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667280  226555 command_runner.go:130] >       "size": "53621675",
	I0817 21:43:31.667284  226555 command_runner.go:130] >       "uid": null,
	I0817 21:43:31.667288  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667291  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667297  226555 command_runner.go:130] >     },
	I0817 21:43:31.667304  226555 command_runner.go:130] >     {
	I0817 21:43:31.667309  226555 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0817 21:43:31.667313  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667318  226555 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0817 21:43:31.667323  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667326  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667342  226555 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0817 21:43:31.667356  226555 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0817 21:43:31.667360  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667370  226555 command_runner.go:130] >       "size": "297083935",
	I0817 21:43:31.667376  226555 command_runner.go:130] >       "uid": {
	I0817 21:43:31.667389  226555 command_runner.go:130] >         "value": "0"
	I0817 21:43:31.667410  226555 command_runner.go:130] >       },
	I0817 21:43:31.667419  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667424  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667429  226555 command_runner.go:130] >     },
	I0817 21:43:31.667433  226555 command_runner.go:130] >     {
	I0817 21:43:31.667439  226555 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0817 21:43:31.667445  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667450  226555 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0817 21:43:31.667456  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667460  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667472  226555 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0817 21:43:31.667488  226555 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0817 21:43:31.667498  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667505  226555 command_runner.go:130] >       "size": "122078160",
	I0817 21:43:31.667514  226555 command_runner.go:130] >       "uid": {
	I0817 21:43:31.667519  226555 command_runner.go:130] >         "value": "0"
	I0817 21:43:31.667525  226555 command_runner.go:130] >       },
	I0817 21:43:31.667532  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667538  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667542  226555 command_runner.go:130] >     },
	I0817 21:43:31.667545  226555 command_runner.go:130] >     {
	I0817 21:43:31.667551  226555 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0817 21:43:31.667557  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667571  226555 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0817 21:43:31.667580  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667590  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667607  226555 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0817 21:43:31.667619  226555 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0817 21:43:31.667642  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667649  226555 command_runner.go:130] >       "size": "113931062",
	I0817 21:43:31.667653  226555 command_runner.go:130] >       "uid": {
	I0817 21:43:31.667659  226555 command_runner.go:130] >         "value": "0"
	I0817 21:43:31.667668  226555 command_runner.go:130] >       },
	I0817 21:43:31.667679  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667689  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667702  226555 command_runner.go:130] >     },
	I0817 21:43:31.667711  226555 command_runner.go:130] >     {
	I0817 21:43:31.667725  226555 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0817 21:43:31.667733  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667738  226555 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0817 21:43:31.667744  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667751  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667767  226555 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0817 21:43:31.667783  226555 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0817 21:43:31.667792  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667803  226555 command_runner.go:130] >       "size": "72714135",
	I0817 21:43:31.667812  226555 command_runner.go:130] >       "uid": null,
	I0817 21:43:31.667822  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.667828  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.667834  226555 command_runner.go:130] >     },
	I0817 21:43:31.667840  226555 command_runner.go:130] >     {
	I0817 21:43:31.667853  226555 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0817 21:43:31.667864  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.667878  226555 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0817 21:43:31.667887  226555 command_runner.go:130] >       ],
	I0817 21:43:31.667897  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.667912  226555 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0817 21:43:31.667987  226555 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0817 21:43:31.667998  226555 command_runner.go:130] >       ],
	I0817 21:43:31.668005  226555 command_runner.go:130] >       "size": "59814710",
	I0817 21:43:31.668010  226555 command_runner.go:130] >       "uid": {
	I0817 21:43:31.668014  226555 command_runner.go:130] >         "value": "0"
	I0817 21:43:31.668020  226555 command_runner.go:130] >       },
	I0817 21:43:31.668027  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.668033  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.668044  226555 command_runner.go:130] >     },
	I0817 21:43:31.668052  226555 command_runner.go:130] >     {
	I0817 21:43:31.668064  226555 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0817 21:43:31.668073  226555 command_runner.go:130] >       "repoTags": [
	I0817 21:43:31.668084  226555 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0817 21:43:31.668093  226555 command_runner.go:130] >       ],
	I0817 21:43:31.668103  226555 command_runner.go:130] >       "repoDigests": [
	I0817 21:43:31.668118  226555 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0817 21:43:31.668130  226555 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0817 21:43:31.668140  226555 command_runner.go:130] >       ],
	I0817 21:43:31.668148  226555 command_runner.go:130] >       "size": "750414",
	I0817 21:43:31.668157  226555 command_runner.go:130] >       "uid": {
	I0817 21:43:31.668164  226555 command_runner.go:130] >         "value": "65535"
	I0817 21:43:31.668173  226555 command_runner.go:130] >       },
	I0817 21:43:31.668180  226555 command_runner.go:130] >       "username": "",
	I0817 21:43:31.668190  226555 command_runner.go:130] >       "spec": null
	I0817 21:43:31.668198  226555 command_runner.go:130] >     }
	I0817 21:43:31.668203  226555 command_runner.go:130] >   ]
	I0817 21:43:31.668211  226555 command_runner.go:130] > }
	I0817 21:43:31.669303  226555 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:43:31.669327  226555 cache_images.go:84] Images are preloaded, skipping loading
	I0817 21:43:31.669416  226555 ssh_runner.go:195] Run: crio config
	I0817 21:43:31.720405  226555 command_runner.go:130] ! time="2023-08-17 21:43:31.666894243Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0817 21:43:31.720446  226555 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:43:31.728677  226555 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:43:31.728720  226555 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:43:31.728731  226555 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:43:31.728737  226555 command_runner.go:130] > #
	I0817 21:43:31.728747  226555 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:43:31.728756  226555 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:43:31.728765  226555 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:43:31.728781  226555 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:43:31.728795  226555 command_runner.go:130] > # reload'.
	I0817 21:43:31.728804  226555 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:43:31.728818  226555 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:43:31.728832  226555 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:43:31.728844  226555 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:43:31.728852  226555 command_runner.go:130] > [crio]
	I0817 21:43:31.728861  226555 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:43:31.728867  226555 command_runner.go:130] > # container images, in this directory.
	I0817 21:43:31.728879  226555 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0817 21:43:31.728891  226555 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:43:31.728898  226555 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0817 21:43:31.728904  226555 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:43:31.728910  226555 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:43:31.728917  226555 command_runner.go:130] > storage_driver = "overlay"
	I0817 21:43:31.728922  226555 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:43:31.728930  226555 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:43:31.728937  226555 command_runner.go:130] > storage_option = [
	I0817 21:43:31.728942  226555 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0817 21:43:31.728948  226555 command_runner.go:130] > ]
	I0817 21:43:31.728954  226555 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:43:31.728962  226555 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:43:31.728968  226555 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:43:31.728974  226555 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:43:31.728983  226555 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:43:31.728990  226555 command_runner.go:130] > # always happen on a node reboot
	I0817 21:43:31.728995  226555 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:43:31.729005  226555 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:43:31.729013  226555 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:43:31.729027  226555 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:43:31.729034  226555 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:43:31.729044  226555 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:43:31.729054  226555 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:43:31.729060  226555 command_runner.go:130] > # internal_wipe = true
	I0817 21:43:31.729066  226555 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:43:31.729073  226555 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:43:31.729079  226555 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:43:31.729088  226555 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:43:31.729096  226555 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:43:31.729101  226555 command_runner.go:130] > [crio.api]
	I0817 21:43:31.729106  226555 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:43:31.729113  226555 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:43:31.729118  226555 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:43:31.729132  226555 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:43:31.729141  226555 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:43:31.729151  226555 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:43:31.729158  226555 command_runner.go:130] > # stream_port = "0"
	I0817 21:43:31.729164  226555 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:43:31.729170  226555 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:43:31.729176  226555 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:43:31.729182  226555 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:43:31.729188  226555 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:43:31.729196  226555 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:43:31.729202  226555 command_runner.go:130] > # minutes.
	I0817 21:43:31.729206  226555 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:43:31.729214  226555 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:43:31.729222  226555 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:43:31.729229  226555 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:43:31.729237  226555 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:43:31.729246  226555 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:43:31.729251  226555 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:43:31.729257  226555 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:43:31.729264  226555 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:43:31.729273  226555 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0817 21:43:31.729282  226555 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:43:31.729289  226555 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0817 21:43:31.729310  226555 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:43:31.729319  226555 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:43:31.729323  226555 command_runner.go:130] > [crio.runtime]
	I0817 21:43:31.729328  226555 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:43:31.729333  226555 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:43:31.729340  226555 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:43:31.729346  226555 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:43:31.729352  226555 command_runner.go:130] > # default_ulimits = [
	I0817 21:43:31.729355  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729367  226555 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:43:31.729374  226555 command_runner.go:130] > # no_pivot = false
	I0817 21:43:31.729380  226555 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:43:31.729388  226555 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:43:31.729394  226555 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:43:31.729400  226555 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:43:31.729410  226555 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:43:31.729419  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:43:31.729426  226555 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0817 21:43:31.729430  226555 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:43:31.729439  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:43:31.729445  226555 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:43:31.729452  226555 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:43:31.729459  226555 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:43:31.729465  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:43:31.729471  226555 command_runner.go:130] > conmon_env = [
	I0817 21:43:31.729477  226555 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0817 21:43:31.729482  226555 command_runner.go:130] > ]
	I0817 21:43:31.729488  226555 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:43:31.729495  226555 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:43:31.729501  226555 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:43:31.729507  226555 command_runner.go:130] > # default_env = [
	I0817 21:43:31.729510  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729518  226555 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:43:31.729526  226555 command_runner.go:130] > # selinux = false
	I0817 21:43:31.729535  226555 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:43:31.729540  226555 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:43:31.729546  226555 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:43:31.729552  226555 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:43:31.729557  226555 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:43:31.729565  226555 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:43:31.729572  226555 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:43:31.729582  226555 command_runner.go:130] > # which might increase security.
	I0817 21:43:31.729589  226555 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0817 21:43:31.729601  226555 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:43:31.729611  226555 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:43:31.729624  226555 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:43:31.729636  226555 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0817 21:43:31.729647  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:43:31.729658  226555 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:43:31.729670  226555 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:43:31.729676  226555 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:43:31.729690  226555 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:43:31.729704  226555 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:43:31.729710  226555 command_runner.go:130] > # irqbalance daemon.
	I0817 21:43:31.729722  226555 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:43:31.729733  226555 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:43:31.729741  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:43:31.729745  226555 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:43:31.729753  226555 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:43:31.729757  226555 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:43:31.729765  226555 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:43:31.729772  226555 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:43:31.729778  226555 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:43:31.729787  226555 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:43:31.729791  226555 command_runner.go:130] > # will be added.
	I0817 21:43:31.729795  226555 command_runner.go:130] > # default_capabilities = [
	I0817 21:43:31.729801  226555 command_runner.go:130] > # 	"CHOWN",
	I0817 21:43:31.729805  226555 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:43:31.729808  226555 command_runner.go:130] > # 	"FSETID",
	I0817 21:43:31.729816  226555 command_runner.go:130] > # 	"FOWNER",
	I0817 21:43:31.729823  226555 command_runner.go:130] > # 	"SETGID",
	I0817 21:43:31.729826  226555 command_runner.go:130] > # 	"SETUID",
	I0817 21:43:31.729830  226555 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:43:31.729834  226555 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:43:31.729839  226555 command_runner.go:130] > # 	"KILL",
	I0817 21:43:31.729843  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729851  226555 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:43:31.729857  226555 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:43:31.729863  226555 command_runner.go:130] > # default_sysctls = [
	I0817 21:43:31.729867  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729874  226555 command_runner.go:130] > # List of devices on the host that a
	I0817 21:43:31.729880  226555 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:43:31.729886  226555 command_runner.go:130] > # allowed_devices = [
	I0817 21:43:31.729889  226555 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:43:31.729895  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729900  226555 command_runner.go:130] > # List of additional devices, specified as
	I0817 21:43:31.729909  226555 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:43:31.729917  226555 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:43:31.729950  226555 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:43:31.729965  226555 command_runner.go:130] > # additional_devices = [
	I0817 21:43:31.729968  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729973  226555 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:43:31.729977  226555 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:43:31.729981  226555 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:43:31.729987  226555 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:43:31.729990  226555 command_runner.go:130] > # ]
	I0817 21:43:31.729996  226555 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:43:31.730004  226555 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:43:31.730008  226555 command_runner.go:130] > # Defaults to false.
	I0817 21:43:31.730015  226555 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:43:31.730021  226555 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:43:31.730027  226555 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:43:31.730033  226555 command_runner.go:130] > # hooks_dir = [
	I0817 21:43:31.730037  226555 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:43:31.730041  226555 command_runner.go:130] > # ]
	I0817 21:43:31.730049  226555 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:43:31.730070  226555 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:43:31.730078  226555 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:43:31.730087  226555 command_runner.go:130] > #
	I0817 21:43:31.730095  226555 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:43:31.730103  226555 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:43:31.730109  226555 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:43:31.730115  226555 command_runner.go:130] > #
	I0817 21:43:31.730121  226555 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:43:31.730134  226555 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:43:31.730143  226555 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:43:31.730148  226555 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:43:31.730152  226555 command_runner.go:130] > #
	I0817 21:43:31.730158  226555 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:43:31.730166  226555 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:43:31.730172  226555 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:43:31.730178  226555 command_runner.go:130] > pids_limit = 1024
	I0817 21:43:31.730184  226555 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0817 21:43:31.730195  226555 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:43:31.730204  226555 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:43:31.730211  226555 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:43:31.730217  226555 command_runner.go:130] > # log_size_max = -1
	I0817 21:43:31.730224  226555 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0817 21:43:31.730230  226555 command_runner.go:130] > # log_to_journald = false
	I0817 21:43:31.730236  226555 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:43:31.730243  226555 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:43:31.730248  226555 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:43:31.730255  226555 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:43:31.730261  226555 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:43:31.730267  226555 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:43:31.730272  226555 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:43:31.730279  226555 command_runner.go:130] > # read_only = false
	I0817 21:43:31.730285  226555 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:43:31.730293  226555 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:43:31.730297  226555 command_runner.go:130] > # live configuration reload.
	I0817 21:43:31.730304  226555 command_runner.go:130] > # log_level = "info"
	I0817 21:43:31.730311  226555 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:43:31.730318  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:43:31.730322  226555 command_runner.go:130] > # log_filter = ""
	I0817 21:43:31.730331  226555 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:43:31.730337  226555 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:43:31.730343  226555 command_runner.go:130] > # separated by comma.
	I0817 21:43:31.730347  226555 command_runner.go:130] > # uid_mappings = ""
	I0817 21:43:31.730355  226555 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:43:31.730361  226555 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:43:31.730367  226555 command_runner.go:130] > # separated by comma.
	I0817 21:43:31.730371  226555 command_runner.go:130] > # gid_mappings = ""
	I0817 21:43:31.730377  226555 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:43:31.730385  226555 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:43:31.730391  226555 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:43:31.730397  226555 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:43:31.730403  226555 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:43:31.730411  226555 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:43:31.730417  226555 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:43:31.730426  226555 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:43:31.730432  226555 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:43:31.730443  226555 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:43:31.730450  226555 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0817 21:43:31.730454  226555 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:43:31.730462  226555 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:43:31.730468  226555 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:43:31.730475  226555 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:43:31.730480  226555 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:43:31.730486  226555 command_runner.go:130] > drop_infra_ctr = false
	I0817 21:43:31.730494  226555 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:43:31.730500  226555 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:43:31.730509  226555 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:43:31.730513  226555 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:43:31.730519  226555 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:43:31.730524  226555 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:43:31.730529  226555 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:43:31.730536  226555 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:43:31.730544  226555 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0817 21:43:31.730550  226555 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:43:31.730559  226555 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:43:31.730565  226555 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:43:31.730571  226555 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:43:31.730580  226555 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:43:31.730592  226555 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0817 21:43:31.730615  226555 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0817 21:43:31.730626  226555 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:43:31.730642  226555 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:43:31.730653  226555 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:43:31.730663  226555 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:43:31.730668  226555 command_runner.go:130] > # ]
	I0817 21:43:31.730686  226555 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:43:31.730698  226555 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:43:31.730707  226555 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:43:31.730713  226555 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:43:31.730718  226555 command_runner.go:130] > #
	I0817 21:43:31.730727  226555 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:43:31.730734  226555 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:43:31.730739  226555 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:43:31.730746  226555 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:43:31.730751  226555 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:43:31.730757  226555 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:43:31.730761  226555 command_runner.go:130] > # Where:
	I0817 21:43:31.730769  226555 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:43:31.730775  226555 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:43:31.730781  226555 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:43:31.730789  226555 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:43:31.730792  226555 command_runner.go:130] > #   in $PATH.
	I0817 21:43:31.730801  226555 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:43:31.730806  226555 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:43:31.730812  226555 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:43:31.730816  226555 command_runner.go:130] > #   state.
	I0817 21:43:31.730822  226555 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:43:31.730830  226555 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0817 21:43:31.730839  226555 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:43:31.730846  226555 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:43:31.730852  226555 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:43:31.730865  226555 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:43:31.730872  226555 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:43:31.730878  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:43:31.730885  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:43:31.730894  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:43:31.730900  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:43:31.730909  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:43:31.730916  226555 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:43:31.730921  226555 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:43:31.730930  226555 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:43:31.730935  226555 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:43:31.730941  226555 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:43:31.730946  226555 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0817 21:43:31.730952  226555 command_runner.go:130] > runtime_type = "oci"
	I0817 21:43:31.730957  226555 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:43:31.730966  226555 command_runner.go:130] > runtime_config_path = ""
	I0817 21:43:31.730970  226555 command_runner.go:130] > monitor_path = ""
	I0817 21:43:31.730974  226555 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:43:31.730978  226555 command_runner.go:130] > monitor_exec_cgroup = ""
	I0817 21:43:31.730983  226555 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:43:31.730990  226555 command_runner.go:130] > # running containers
	I0817 21:43:31.730994  226555 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:43:31.731002  226555 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:43:31.731092  226555 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:43:31.731108  226555 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0817 21:43:31.731113  226555 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:43:31.731118  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:43:31.731122  226555 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:43:31.731130  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:43:31.731137  226555 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:43:31.731142  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:43:31.731149  226555 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:43:31.731154  226555 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:43:31.731164  226555 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:43:31.731171  226555 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0817 21:43:31.731181  226555 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:43:31.731187  226555 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:43:31.731196  226555 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:43:31.731206  226555 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:43:31.731214  226555 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:43:31.731223  226555 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:43:31.731226  226555 command_runner.go:130] > # Example:
	I0817 21:43:31.731231  226555 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:43:31.731241  226555 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:43:31.731249  226555 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:43:31.731253  226555 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:43:31.731258  226555 command_runner.go:130] > # cpuset = 0
	I0817 21:43:31.731265  226555 command_runner.go:130] > # cpushares = "0-1"
	I0817 21:43:31.731271  226555 command_runner.go:130] > # Where:
	I0817 21:43:31.731276  226555 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:43:31.731285  226555 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:43:31.731293  226555 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:43:31.731301  226555 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:43:31.731309  226555 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:43:31.731316  226555 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0817 21:43:31.731319  226555 command_runner.go:130] > # 
	I0817 21:43:31.731326  226555 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:43:31.731331  226555 command_runner.go:130] > #
	I0817 21:43:31.731337  226555 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:43:31.731343  226555 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:43:31.731349  226555 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:43:31.731357  226555 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:43:31.731363  226555 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:43:31.731369  226555 command_runner.go:130] > [crio.image]
	I0817 21:43:31.731374  226555 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:43:31.731381  226555 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:43:31.731387  226555 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:43:31.731395  226555 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:43:31.731399  226555 command_runner.go:130] > # global_auth_file = ""
	I0817 21:43:31.731410  226555 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:43:31.731417  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:43:31.731422  226555 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:43:31.731428  226555 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:43:31.731438  226555 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:43:31.731443  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:43:31.731449  226555 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:43:31.731471  226555 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:43:31.731483  226555 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0817 21:43:31.731492  226555 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0817 21:43:31.731498  226555 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:43:31.731504  226555 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:43:31.731510  226555 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:43:31.731519  226555 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:43:31.731525  226555 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:43:31.731533  226555 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:43:31.731538  226555 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:43:31.731545  226555 command_runner.go:130] > # signature_policy = ""
	I0817 21:43:31.731553  226555 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:43:31.731562  226555 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:43:31.731566  226555 command_runner.go:130] > # changing them here.
	I0817 21:43:31.731573  226555 command_runner.go:130] > # insecure_registries = [
	I0817 21:43:31.731578  226555 command_runner.go:130] > # ]
	I0817 21:43:31.731595  226555 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:43:31.731604  226555 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0817 21:43:31.731612  226555 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:43:31.731620  226555 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:43:31.731627  226555 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:43:31.731636  226555 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:43:31.731641  226555 command_runner.go:130] > # CNI plugins.
	I0817 21:43:31.731647  226555 command_runner.go:130] > [crio.network]
	I0817 21:43:31.731656  226555 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:43:31.731664  226555 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0817 21:43:31.731670  226555 command_runner.go:130] > # cni_default_network = ""
	I0817 21:43:31.731678  226555 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:43:31.731685  226555 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:43:31.731697  226555 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:43:31.731703  226555 command_runner.go:130] > # plugin_dirs = [
	I0817 21:43:31.731709  226555 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:43:31.731714  226555 command_runner.go:130] > # ]
	I0817 21:43:31.731722  226555 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:43:31.731727  226555 command_runner.go:130] > [crio.metrics]
	I0817 21:43:31.731732  226555 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:43:31.731735  226555 command_runner.go:130] > enable_metrics = true
	I0817 21:43:31.731740  226555 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:43:31.731744  226555 command_runner.go:130] > # Per default all metrics are enabled.
	I0817 21:43:31.731750  226555 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:43:31.731758  226555 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:43:31.731763  226555 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:43:31.731767  226555 command_runner.go:130] > # metrics_collectors = [
	I0817 21:43:31.731770  226555 command_runner.go:130] > # 	"operations",
	I0817 21:43:31.731775  226555 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:43:31.731779  226555 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:43:31.731782  226555 command_runner.go:130] > # 	"operations_errors",
	I0817 21:43:31.731789  226555 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:43:31.731793  226555 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:43:31.731797  226555 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:43:31.731801  226555 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:43:31.731804  226555 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:43:31.731808  226555 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:43:31.731812  226555 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:43:31.731816  226555 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:43:31.731821  226555 command_runner.go:130] > # 	"containers_oom",
	I0817 21:43:31.731825  226555 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:43:31.731831  226555 command_runner.go:130] > # 	"operations_total",
	I0817 21:43:31.731835  226555 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:43:31.731843  226555 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:43:31.731847  226555 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:43:31.731854  226555 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:43:31.731858  226555 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:43:31.731864  226555 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:43:31.731869  226555 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:43:31.731877  226555 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:43:31.731884  226555 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:43:31.731887  226555 command_runner.go:130] > # ]
	I0817 21:43:31.731892  226555 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:43:31.731897  226555 command_runner.go:130] > # metrics_port = 9090
	I0817 21:43:31.731902  226555 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:43:31.731908  226555 command_runner.go:130] > # metrics_socket = ""
	I0817 21:43:31.731913  226555 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:43:31.731921  226555 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:43:31.731927  226555 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:43:31.731933  226555 command_runner.go:130] > # certificate on any modification event.
	I0817 21:43:31.731937  226555 command_runner.go:130] > # metrics_cert = ""
	I0817 21:43:31.731944  226555 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:43:31.731949  226555 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:43:31.731955  226555 command_runner.go:130] > # metrics_key = ""
	I0817 21:43:31.731961  226555 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:43:31.731967  226555 command_runner.go:130] > [crio.tracing]
	I0817 21:43:31.731972  226555 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:43:31.731981  226555 command_runner.go:130] > # enable_tracing = false
	I0817 21:43:31.731986  226555 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0817 21:43:31.731994  226555 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:43:31.731999  226555 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:43:31.732006  226555 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:43:31.732012  226555 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:43:31.732018  226555 command_runner.go:130] > [crio.stats]
	I0817 21:43:31.732023  226555 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:43:31.732031  226555 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:43:31.732036  226555 command_runner.go:130] > # stats_collection_period = 0
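The block above is the effective CRI-O TOML configuration as reported by "crio config" on the node. A minimal sketch (not part of the test run) for reproducing it against this profile while it is still up, e.g. to spot the non-default settings minikube applies:

    out/minikube-linux-amd64 -p multinode-959371 ssh "sudo crio config" | grep -E 'cgroup_manager|conmon|pids_limit|pause_image|drop_infra_ctr'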
	I0817 21:43:31.732120  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:43:31.732136  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:43:31.732157  226555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:43:31.732178  226555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959371 NodeName:multinode-959371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:43:31.732312  226555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959371"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
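The three documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, plus the KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch, not part of the test run, of how the same configuration can be cross-checked by hand on the control-plane node, assuming kubectl is pointed at this cluster:

	# Compare the freshly generated file against the one already on disk,
	# exactly as minikube does later in this log with "diff -u".
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# The live ClusterConfiguration is also kept in the kubeadm-config ConfigMap.
	kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}'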
	I0817 21:43:31.732425  226555 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-959371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
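The ExecStart line above is rendered into a systemd drop-in rather than edited into the unit itself; the scp commands just below copy it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the unit file to /lib/systemd/system/kubelet.service. A minimal sketch, not part of the run, of how to inspect what the kubelet is actually started with on the node:

	# Show the unit plus every drop-in that overrides it.
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf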
	I0817 21:43:31.732486  226555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:43:31.742181  226555 command_runner.go:130] > kubeadm
	I0817 21:43:31.742208  226555 command_runner.go:130] > kubectl
	I0817 21:43:31.742215  226555 command_runner.go:130] > kubelet
	I0817 21:43:31.742241  226555 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:43:31.742305  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:43:31.751221  226555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0817 21:43:31.768357  226555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:43:31.787277  226555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0817 21:43:31.808198  226555 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0817 21:43:31.812458  226555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:43:31.826515  226555 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371 for IP: 192.168.39.104
	I0817 21:43:31.826582  226555 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:43:31.826760  226555 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:43:31.826815  226555 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:43:31.826926  226555 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key
	I0817 21:43:31.827008  226555 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key.a10f9b59
	I0817 21:43:31.827063  226555 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key
	I0817 21:43:31.827082  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 21:43:31.827101  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 21:43:31.827119  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 21:43:31.827164  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 21:43:31.827183  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:43:31.827202  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:43:31.827221  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:43:31.827239  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:43:31.827322  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 21:43:31.827362  226555 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 21:43:31.827378  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:43:31.827414  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:43:31.827452  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:43:31.827492  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:43:31.827551  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:43:31.827594  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /usr/share/ca-certificates/2106702.pem
	I0817 21:43:31.827615  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:43:31.827633  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem -> /usr/share/ca-certificates/210670.pem
	I0817 21:43:31.828437  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:43:31.856512  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 21:43:31.882264  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:43:31.906849  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 21:43:31.931757  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:43:31.957378  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:43:31.983580  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:43:32.013217  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:43:32.039088  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 21:43:32.063767  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:43:32.091065  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 21:43:32.121058  226555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:43:32.141593  226555 ssh_runner.go:195] Run: openssl version
	I0817 21:43:32.147748  226555 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0817 21:43:32.147846  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 21:43:32.159589  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 21:43:32.165174  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:43:32.165225  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:43:32.165284  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 21:43:32.171761  226555 command_runner.go:130] > 3ec20f2e
	I0817 21:43:32.171856  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:43:32.183988  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:43:32.195842  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:43:32.201184  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:43:32.201355  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:43:32.201414  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:43:32.207697  226555 command_runner.go:130] > b5213941
	I0817 21:43:32.207887  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:43:32.219493  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 21:43:32.231637  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 21:43:32.237034  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:43:32.237109  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:43:32.237172  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 21:43:32.243618  226555 command_runner.go:130] > 51391683
	I0817 21:43:32.243814  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
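The three "ln -fs" commands above follow the c_rehash convention: the link name is the certificate's subject hash (as printed by "openssl x509 -hash") plus a ".0" suffix, which is the layout OpenSSL-based clients expect under /etc/ssl/certs. A minimal sketch of the same step for one certificate, using the paths from this run:

	# Derive the subject hash and create the <hash>.0 symlink.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"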
	I0817 21:43:32.257717  226555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:43:32.262863  226555 command_runner.go:130] > ca.crt
	I0817 21:43:32.262887  226555 command_runner.go:130] > ca.key
	I0817 21:43:32.262893  226555 command_runner.go:130] > healthcheck-client.crt
	I0817 21:43:32.262897  226555 command_runner.go:130] > healthcheck-client.key
	I0817 21:43:32.262903  226555 command_runner.go:130] > peer.crt
	I0817 21:43:32.262906  226555 command_runner.go:130] > peer.key
	I0817 21:43:32.262910  226555 command_runner.go:130] > server.crt
	I0817 21:43:32.262913  226555 command_runner.go:130] > server.key
	I0817 21:43:32.263094  226555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 21:43:32.270684  226555 command_runner.go:130] > Certificate will not expire
	I0817 21:43:32.270767  226555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 21:43:32.277608  226555 command_runner.go:130] > Certificate will not expire
	I0817 21:43:32.277737  226555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 21:43:32.284421  226555 command_runner.go:130] > Certificate will not expire
	I0817 21:43:32.284643  226555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 21:43:32.291573  226555 command_runner.go:130] > Certificate will not expire
	I0817 21:43:32.292038  226555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 21:43:32.299413  226555 command_runner.go:130] > Certificate will not expire
	I0817 21:43:32.299654  226555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 21:43:32.306124  226555 command_runner.go:130] > Certificate will not expire
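Each "-checkend 86400" probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; it prints "Certificate will not expire" and exits 0 if so, or prints "Certificate will expire" and exits non-zero otherwise. A minimal sketch of the same check for one certificate from this run:

	# A non-zero exit status means the cert expires within the 24h window.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
	  || echo "certificate expires within 24h"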
	I0817 21:43:32.306221  226555 kubeadm.go:404] StartCluster: {Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0}
	I0817 21:43:32.306372  226555 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:43:32.306448  226555 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:43:32.341385  226555 cri.go:89] found id: ""
	I0817 21:43:32.341511  226555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:43:32.352418  226555 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0817 21:43:32.352442  226555 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0817 21:43:32.352448  226555 command_runner.go:130] > /var/lib/minikube/etcd:
	I0817 21:43:32.352452  226555 command_runner.go:130] > member
	I0817 21:43:32.352466  226555 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 21:43:32.352476  226555 kubeadm.go:636] restartCluster start
	I0817 21:43:32.352545  226555 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 21:43:32.363146  226555 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:32.364062  226555 kubeconfig.go:92] found "multinode-959371" server: "https://192.168.39.104:8443"
	I0817 21:43:32.364787  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:43:32.365198  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:43:32.366305  226555 cert_rotation.go:137] Starting client certificate rotation controller
	I0817 21:43:32.366640  226555 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 21:43:32.376616  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:32.376698  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:32.388874  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:32.388902  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:32.388946  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:32.400539  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:32.901400  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:32.901505  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:32.913666  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:33.400879  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:33.400997  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:33.413591  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:33.900789  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:33.900910  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:33.913868  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:34.401494  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:34.401611  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:34.413535  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:34.901108  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:34.901222  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:34.912933  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:35.401571  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:35.401689  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:35.413600  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:35.900751  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:35.900883  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:35.913135  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:36.400719  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:36.400830  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:36.413501  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:36.901042  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:36.901139  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:36.912703  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:37.401395  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:37.401479  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:37.413441  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:37.901579  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:37.901684  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:37.913285  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:38.400847  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:38.400956  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:38.412521  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:38.901076  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:38.901160  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:38.913797  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:39.401366  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:39.401473  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:39.412996  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:39.901635  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:39.901724  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:39.913073  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:40.400704  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:40.400817  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:40.412480  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:40.901706  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:40.901812  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:40.913480  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:41.401043  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:41.401165  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:41.412441  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:43:41.900976  226555 api_server.go:166] Checking apiserver status ...
	I0817 21:43:41.901071  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 21:43:41.913104  226555 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
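The probe repeated throughout this block is a single pgrep invocation: -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest matching PID, so the command exits with status 1 (as logged above) until a matching kube-apiserver process exists. A minimal sketch of the same probe run by hand on the node:

	# Prints the newest matching PID, or exits 1 if the apiserver is not running yet.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'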
	I0817 21:43:42.377459  226555 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 21:43:42.377520  226555 kubeadm.go:1128] stopping kube-system containers ...
	I0817 21:43:42.377542  226555 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 21:43:42.377627  226555 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:43:42.408176  226555 cri.go:89] found id: ""
	I0817 21:43:42.408256  226555 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 21:43:42.423408  226555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:43:42.432532  226555 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0817 21:43:42.432557  226555 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0817 21:43:42.432564  226555 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0817 21:43:42.432573  226555 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:43:42.432600  226555 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:43:42.432674  226555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:43:42.441875  226555 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 21:43:42.441911  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:43:42.550632  226555 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:43:42.551150  226555 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0817 21:43:42.551653  226555 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0817 21:43:42.552264  226555 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 21:43:42.553160  226555 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0817 21:43:42.553739  226555 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0817 21:43:42.554639  226555 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0817 21:43:42.555134  226555 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0817 21:43:42.555701  226555 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0817 21:43:42.556187  226555 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 21:43:42.556737  226555 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 21:43:42.557587  226555 command_runner.go:130] > [certs] Using the existing "sa" key
	I0817 21:43:42.558851  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:43:42.611772  226555 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:43:42.657833  226555 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:43:42.890575  226555 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:43:43.083646  226555 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:43:43.206445  226555 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:43:43.209127  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:43:43.280980  226555 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:43:43.281008  226555 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:43:43.281014  226555 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:43:43.439930  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:43:43.519178  226555 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:43:43.519205  226555 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:43:43.528910  226555 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:43:43.529545  226555 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:43:43.531974  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:43:43.606233  226555 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
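At this point the control-plane and etcd phases have rewritten the static pod manifests, and the kubelet restarted in the kubelet-start phase recreates the pods from them via the staticPodPath set in the KubeletConfiguration above. A minimal sketch, assuming the standard kubeadm manifest names, of how to confirm that on the node:

	ls /etc/kubernetes/manifests
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml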
	I0817 21:43:43.610159  226555 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:43:43.610230  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:43.624271  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:44.149154  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:44.649341  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:45.148725  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:45.648535  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:46.148809  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:43:46.172749  226555 command_runner.go:130] > 1109
	I0817 21:43:46.172828  226555 api_server.go:72] duration metric: took 2.562673532s to wait for apiserver process to appear ...
	I0817 21:43:46.172844  226555 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:43:46.172868  226555 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:43:49.807760  226555 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 21:43:49.807795  226555 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 21:43:49.807811  226555 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:43:49.867176  226555 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 21:43:49.867209  226555 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 21:43:50.368009  226555 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:43:50.373087  226555 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 21:43:50.373114  226555 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 21:43:50.868048  226555 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:43:50.895034  226555 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 21:43:50.895072  226555 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 21:43:51.367558  226555 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:43:51.378022  226555 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
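The 403 responses at the start of this block are presumably returned while the RBAC bootstrap roles are still being created (note the failed poststarthook/rbac/bootstrap-roles check in the 500 bodies); once they exist, /healthz is readable even anonymously and the probe settles on 200 "ok". A minimal sketch of the same probe by hand, against the endpoint used in this run:

	# Anonymous probe; ?verbose returns the per-check breakdown seen in the 500 bodies above.
	curl -sk 'https://192.168.39.104:8443/healthz?verbose'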
	I0817 21:43:51.378164  226555 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I0817 21:43:51.378175  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:51.378183  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:51.378193  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:51.393576  226555 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0817 21:43:51.393612  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:51.393625  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:51.393633  226555 round_trippers.go:580]     Content-Length: 263
	I0817 21:43:51.393641  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:51 GMT
	I0817 21:43:51.393651  226555 round_trippers.go:580]     Audit-Id: a1777034-eef2-4b03-a51e-e359a04965ca
	I0817 21:43:51.393660  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:51.393671  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:51.393683  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:51.393719  226555 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0817 21:43:51.393828  226555 api_server.go:141] control plane version: v1.27.4
	I0817 21:43:51.393854  226555 api_server.go:131] duration metric: took 5.220994854s to wait for apiserver health ...
	I0817 21:43:51.393870  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:43:51.393887  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:43:51.396204  226555 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:43:51.397991  226555 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:43:51.414245  226555 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:43:51.414277  226555 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0817 21:43:51.414285  226555 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0817 21:43:51.414295  226555 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:43:51.414303  226555 command_runner.go:130] > Access: 2023-08-17 21:43:18.834579600 +0000
	I0817 21:43:51.414311  226555 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0817 21:43:51.414319  226555 command_runner.go:130] > Change: 2023-08-17 21:43:16.782579600 +0000
	I0817 21:43:51.414329  226555 command_runner.go:130] >  Birth: -
	I0817 21:43:51.414652  226555 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:43:51.414677  226555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:43:51.450158  226555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:43:52.880029  226555 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:43:52.880061  226555 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:43:52.880071  226555 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0817 21:43:52.880080  226555 command_runner.go:130] > daemonset.apps/kindnet configured
	I0817 21:43:52.880098  226555 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.429913143s)
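The apply above reconciled the kindnet RBAC objects and daemonset from the manifest copied to /var/tmp/minikube/cni.yaml. A minimal sketch of how to confirm the daemonset afterwards, assuming the usual kube-system namespace for minikube's kindnet:

	sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet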
	I0817 21:43:52.880129  226555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:43:52.880256  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:43:52.880266  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:52.880274  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:52.880281  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:52.884866  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:43:52.884900  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:52.884909  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:52.884920  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:52.884926  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:52 GMT
	I0817 21:43:52.884931  226555 round_trippers.go:580]     Audit-Id: f6c79720-888f-494d-bf32-4178795776c7
	I0817 21:43:52.884937  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:52.884942  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:52.887086  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"831"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82649 chars]
	I0817 21:43:52.891358  226555 system_pods.go:59] 12 kube-system pods found
	I0817 21:43:52.891408  226555 system_pods.go:61] "coredns-5d78c9869d-87rlb" [52da85e0-72f0-4919-8615-d1cb46b65ca4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 21:43:52.891419  226555 system_pods.go:61] "etcd-multinode-959371" [0ffe6db5-4285-4788-88b2-073753ece5f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 21:43:52.891424  226555 system_pods.go:61] "kindnet-cmxkw" [0118d29d-3f4f-460d-b5ab-653c3b98d7fa] Running
	I0817 21:43:52.891434  226555 system_pods.go:61] "kindnet-s7l7j" [6af177c8-cc30-4a86-98d8-443cef5036d8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 21:43:52.891441  226555 system_pods.go:61] "kindnet-xjn26" [78b21525-477a-49fa-8fb9-12ba1f58c418] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 21:43:52.891448  226555 system_pods.go:61] "kube-apiserver-multinode-959371" [0efb1ae7-705a-47df-91c6-0d9390b68983] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 21:43:52.891459  226555 system_pods.go:61] "kube-controller-manager-multinode-959371" [00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 21:43:52.891469  226555 system_pods.go:61] "kube-proxy-8gdf7" [00e6f433-51d6-49bb-a927-780720361eb3] Running
	I0817 21:43:52.891474  226555 system_pods.go:61] "kube-proxy-g94gj" [050b1eab-a69f-4f6f-b3b8-f29ef38c9042] Running
	I0817 21:43:52.891480  226555 system_pods.go:61] "kube-proxy-zmldj" [ac59040d-df0c-416f-9660-4a41f7b75789] Running
	I0817 21:43:52.891486  226555 system_pods.go:61] "kube-scheduler-multinode-959371" [a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 21:43:52.891492  226555 system_pods.go:61] "storage-provisioner" [e8aa1192-3588-49da-be88-15a801d006fc] Running
	I0817 21:43:52.891498  226555 system_pods.go:74] duration metric: took 11.361707ms to wait for pod list to return data ...
	I0817 21:43:52.891507  226555 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:43:52.891576  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I0817 21:43:52.891584  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:52.891591  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:52.891597  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:52.894976  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:52.895006  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:52.895017  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:52 GMT
	I0817 21:43:52.895031  226555 round_trippers.go:580]     Audit-Id: a56be4bf-fcd5-4fc7-aba6-823ef8ca4224
	I0817 21:43:52.895037  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:52.895042  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:52.895049  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:52.895058  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:52.895614  226555 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"831"},"items":[{"metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15372 chars]
	I0817 21:43:52.896412  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:43:52.896435  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:43:52.896475  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:43:52.896482  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:43:52.896487  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:43:52.896496  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:43:52.896502  226555 node_conditions.go:105] duration metric: took 4.986735ms to run NodePressure ...
	I0817 21:43:52.896535  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:43:53.193942  226555 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0817 21:43:53.193972  226555 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0817 21:43:53.194002  226555 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 21:43:53.194117  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0817 21:43:53.194126  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.194138  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.194147  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.198517  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:43:53.198538  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.198546  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.198551  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.198557  226555 round_trippers.go:580]     Audit-Id: 23916093-83c2-4dab-9cda-8c3e43f06ad3
	I0817 21:43:53.198562  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.198568  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.198573  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.199718  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"787","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0817 21:43:53.200723  226555 kubeadm.go:787] kubelet initialised
	I0817 21:43:53.200745  226555 kubeadm.go:788] duration metric: took 6.73073ms waiting for restarted kubelet to initialise ...
	I0817 21:43:53.200756  226555 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:43:53.200835  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:43:53.200846  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.200857  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.200868  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.208579  226555 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0817 21:43:53.208601  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.208608  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.208614  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.208626  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.208648  226555 round_trippers.go:580]     Audit-Id: de58b83c-a1c7-4dcc-8606-feee3f34869e
	I0817 21:43:53.208656  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.208664  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.209256  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82649 chars]
	I0817 21:43:53.211702  226555 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:53.211802  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:53.211813  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.211824  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.211835  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.214918  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:53.214934  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.214941  226555 round_trippers.go:580]     Audit-Id: f7c86a19-ed4f-4de7-b77b-c8fdc790984b
	I0817 21:43:53.214950  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.214958  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.214973  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.214982  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.214994  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.215536  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:53.215933  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:53.215946  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.215956  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.215965  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.218675  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:53.218692  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.218699  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.218705  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.218716  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.218727  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.218735  226555 round_trippers.go:580]     Audit-Id: 4f673852-ad62-42e7-8933-6fbf026b61e8
	I0817 21:43:53.218745  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.218876  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:53.219206  226555 pod_ready.go:97] node "multinode-959371" hosting pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.219228  226555 pod_ready.go:81] duration metric: took 7.498869ms waiting for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	E0817 21:43:53.219238  226555 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-959371" hosting pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.219248  226555 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:53.219304  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-959371
	I0817 21:43:53.219314  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.219325  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.219335  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.222532  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:53.222550  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.222556  226555 round_trippers.go:580]     Audit-Id: 1e97ceb8-91ed-41b3-af61-64d21e992667
	I0817 21:43:53.222562  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.222568  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.222577  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.222587  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.222595  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.222751  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"787","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0817 21:43:53.223086  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:53.223100  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.223109  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.223118  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.224906  226555 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:43:53.224927  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.224938  226555 round_trippers.go:580]     Audit-Id: 15e84601-8d5d-475e-bc4a-6e51f7693934
	I0817 21:43:53.224948  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.224958  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.224967  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.224981  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.224991  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.225151  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:53.225533  226555 pod_ready.go:97] node "multinode-959371" hosting pod "etcd-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.225560  226555 pod_ready.go:81] duration metric: took 6.300754ms waiting for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	E0817 21:43:53.225570  226555 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-959371" hosting pod "etcd-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.225598  226555 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:53.225661  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-959371
	I0817 21:43:53.225671  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.225680  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.225690  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.232369  226555 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0817 21:43:53.232387  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.232401  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.232409  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.232417  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.232425  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.232436  226555 round_trippers.go:580]     Audit-Id: 25864aca-3fb6-41f9-b3ab-0a22f32fac67
	I0817 21:43:53.232443  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.232617  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-959371","namespace":"kube-system","uid":"0efb1ae7-705a-47df-91c6-0d9390b68983","resourceVersion":"794","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.104:8443","kubernetes.io/config.hash":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.mirror":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.seen":"2023-08-17T21:33:26.519082064Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0817 21:43:53.233151  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:53.233171  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.233182  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.233191  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.236191  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:53.236206  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.236212  226555 round_trippers.go:580]     Audit-Id: 0f1ed92e-9e92-4f86-8ef3-86b7c71cef1f
	I0817 21:43:53.236218  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.236224  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.236239  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.236248  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.236261  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.236582  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:53.236988  226555 pod_ready.go:97] node "multinode-959371" hosting pod "kube-apiserver-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.237008  226555 pod_ready.go:81] duration metric: took 11.399083ms waiting for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	E0817 21:43:53.237017  226555 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-959371" hosting pod "kube-apiserver-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.237030  226555 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:53.237091  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:43:53.237099  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.237108  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.237119  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.240791  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:53.240824  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.240835  226555 round_trippers.go:580]     Audit-Id: 17fac5e9-1ec7-4552-bb8e-a3e821d27c08
	I0817 21:43:53.240850  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.240858  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.240870  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.240881  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.240893  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.241046  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:43:53.280849  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:53.280873  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.280881  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.280894  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.283913  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:53.283943  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.283954  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.283964  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.283973  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.283982  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.283998  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.284013  226555 round_trippers.go:580]     Audit-Id: a8712b57-5147-493f-a622-78f24c762eac
	I0817 21:43:53.284386  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:53.284946  226555 pod_ready.go:97] node "multinode-959371" hosting pod "kube-controller-manager-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.284974  226555 pod_ready.go:81] duration metric: took 47.933188ms waiting for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	E0817 21:43:53.284986  226555 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-959371" hosting pod "kube-controller-manager-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.284997  226555 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:53.480406  226555 request.go:628] Waited for 195.311598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:43:53.480490  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:43:53.480495  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.480504  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.480511  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.483508  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:53.483538  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.483550  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.483560  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.483568  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.483577  226555 round_trippers.go:580]     Audit-Id: 0de303c8-4f60-47cb-b839-de628dcbc967
	I0817 21:43:53.483586  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.483595  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.483766  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gdf7","generateName":"kube-proxy-","namespace":"kube-system","uid":"00e6f433-51d6-49bb-a927-780720361eb3","resourceVersion":"831","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0817 21:43:53.680763  226555 request.go:628] Waited for 196.40213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:53.680826  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:53.680831  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.680838  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.680848  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.683888  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:53.683915  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.683926  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.683935  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.683943  226555 round_trippers.go:580]     Audit-Id: 343f34fb-27b4-4364-9bf1-a597527f04ac
	I0817 21:43:53.683952  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.683960  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.683982  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.684225  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:53.684665  226555 pod_ready.go:97] node "multinode-959371" hosting pod "kube-proxy-8gdf7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.684687  226555 pod_ready.go:81] duration metric: took 399.673946ms waiting for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	E0817 21:43:53.684699  226555 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-959371" hosting pod "kube-proxy-8gdf7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:53.684711  226555 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:53.881228  226555 request.go:628] Waited for 196.432067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:43:53.881291  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:43:53.881296  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:53.881305  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:53.881311  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:53.884136  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:53.884161  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:53.884168  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:53.884174  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:53.884180  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:53 GMT
	I0817 21:43:53.884185  226555 round_trippers.go:580]     Audit-Id: 131ff139-1e5c-483f-abb0-d2c16abe4c36
	I0817 21:43:53.884194  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:53.884204  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:53.884361  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g94gj","generateName":"kube-proxy-","namespace":"kube-system","uid":"050b1eab-a69f-4f6f-b3b8-f29ef38c9042","resourceVersion":"719","creationTimestamp":"2023-08-17T21:35:12Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0817 21:43:54.081357  226555 request.go:628] Waited for 196.418752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:43:54.081419  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:43:54.081424  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:54.081432  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:54.081439  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:54.084279  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:54.084304  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:54.084312  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:54.084320  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:54.084329  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:54.084338  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:54 GMT
	I0817 21:43:54.084348  226555 round_trippers.go:580]     Audit-Id: f38821d8-b857-479d-828b-1002a4b941a7
	I0817 21:43:54.084357  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:54.084459  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m03","uid":"31bc1a59-dff0-4542-804e-a9c019ecd2f4","resourceVersion":"746","creationTimestamp":"2023-08-17T21:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I0817 21:43:54.084841  226555 pod_ready.go:92] pod "kube-proxy-g94gj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:43:54.084861  226555 pod_ready.go:81] duration metric: took 400.141646ms waiting for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:54.084876  226555 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:54.281372  226555 request.go:628] Waited for 196.422302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:43:54.281452  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:43:54.281457  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:54.281466  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:54.281474  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:54.284421  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:54.284448  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:54.284456  226555 round_trippers.go:580]     Audit-Id: 1e17db7e-535b-4db3-b4f9-18208fcd4827
	I0817 21:43:54.284466  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:54.284475  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:54.284484  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:54.284493  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:54.284502  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:54 GMT
	I0817 21:43:54.284685  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmldj","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac59040d-df0c-416f-9660-4a41f7b75789","resourceVersion":"519","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0817 21:43:54.480520  226555 request.go:628] Waited for 195.319495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:43:54.480598  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:43:54.480605  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:54.480616  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:54.480627  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:54.483437  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:54.483462  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:54.483469  226555 round_trippers.go:580]     Audit-Id: 77820810-40c7-4cc5-9428-77923811142a
	I0817 21:43:54.483475  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:54.483480  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:54.483486  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:54.483491  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:54.483499  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:54 GMT
	I0817 21:43:54.483608  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"744","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0817 21:43:54.483921  226555 pod_ready.go:92] pod "kube-proxy-zmldj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:43:54.483936  226555 pod_ready.go:81] duration metric: took 399.053541ms waiting for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:54.483948  226555 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:54.680365  226555 request.go:628] Waited for 196.341583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:43:54.680439  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:43:54.680444  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:54.680452  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:54.680458  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:54.683291  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:54.683316  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:54.683324  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:54.683330  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:54 GMT
	I0817 21:43:54.683336  226555 round_trippers.go:580]     Audit-Id: daf1326e-e799-4b18-946e-a832578f511a
	I0817 21:43:54.683344  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:54.683352  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:54.683359  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:54.683506  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-959371","namespace":"kube-system","uid":"a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2","resourceVersion":"786","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.mirror":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.seen":"2023-08-17T21:33:26.519087461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0817 21:43:54.881372  226555 request.go:628] Waited for 197.433816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:54.881481  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:54.881496  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:54.881508  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:54.881518  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:54.884630  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:54.884654  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:54.884662  226555 round_trippers.go:580]     Audit-Id: 30287b1f-4e0a-4b10-b79e-1165e0c3ba28
	I0817 21:43:54.884668  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:54.884675  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:54.884683  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:54.884691  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:54.884699  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:54 GMT
	I0817 21:43:54.884842  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:54.885212  226555 pod_ready.go:97] node "multinode-959371" hosting pod "kube-scheduler-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:54.885230  226555 pod_ready.go:81] duration metric: took 401.275518ms waiting for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	E0817 21:43:54.885239  226555 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-959371" hosting pod "kube-scheduler-multinode-959371" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-959371" has status "Ready":"False"
	I0817 21:43:54.885247  226555 pod_ready.go:38] duration metric: took 1.684481109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:43:54.885290  226555 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:43:54.898465  226555 command_runner.go:130] > -16
	I0817 21:43:54.898886  226555 ops.go:34] apiserver oom_adj: -16
	I0817 21:43:54.898907  226555 kubeadm.go:640] restartCluster took 22.54642529s
	I0817 21:43:54.898918  226555 kubeadm.go:406] StartCluster complete in 22.592705695s
	I0817 21:43:54.898941  226555 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:43:54.899042  226555 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:43:54.899926  226555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:43:54.900260  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:43:54.900422  226555 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:43:54.900535  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:43:54.903801  226555 out.go:177] * Enabled addons: 
	I0817 21:43:54.900621  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:43:54.905470  226555 addons.go:502] enable addons completed in 5.068456ms: enabled=[]
	I0817 21:43:54.905657  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:43:54.905989  226555 round_trippers.go:463] GET https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:43:54.906000  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:54.906007  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:54.906013  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:54.908791  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:54.908817  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:54.908827  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:54 GMT
	I0817 21:43:54.908834  226555 round_trippers.go:580]     Audit-Id: 52a9534f-05de-42ab-ba65-80c98d1e7e3f
	I0817 21:43:54.908841  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:54.908849  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:54.908856  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:54.908863  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:54.908870  226555 round_trippers.go:580]     Content-Length: 291
	I0817 21:43:54.908898  226555 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"832","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0817 21:43:54.909060  226555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-959371" context rescaled to 1 replicas
	I0817 21:43:54.909090  226555 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:43:54.910869  226555 out.go:177] * Verifying Kubernetes components...
	I0817 21:43:54.912249  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:43:55.012793  226555 command_runner.go:130] > apiVersion: v1
	I0817 21:43:55.012831  226555 command_runner.go:130] > data:
	I0817 21:43:55.012838  226555 command_runner.go:130] >   Corefile: |
	I0817 21:43:55.012844  226555 command_runner.go:130] >     .:53 {
	I0817 21:43:55.012850  226555 command_runner.go:130] >         log
	I0817 21:43:55.012856  226555 command_runner.go:130] >         errors
	I0817 21:43:55.012862  226555 command_runner.go:130] >         health {
	I0817 21:43:55.012869  226555 command_runner.go:130] >            lameduck 5s
	I0817 21:43:55.012875  226555 command_runner.go:130] >         }
	I0817 21:43:55.012882  226555 command_runner.go:130] >         ready
	I0817 21:43:55.012891  226555 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0817 21:43:55.012899  226555 command_runner.go:130] >            pods insecure
	I0817 21:43:55.012908  226555 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0817 21:43:55.012915  226555 command_runner.go:130] >            ttl 30
	I0817 21:43:55.012921  226555 command_runner.go:130] >         }
	I0817 21:43:55.012928  226555 command_runner.go:130] >         prometheus :9153
	I0817 21:43:55.012934  226555 command_runner.go:130] >         hosts {
	I0817 21:43:55.012942  226555 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0817 21:43:55.012953  226555 command_runner.go:130] >            fallthrough
	I0817 21:43:55.012965  226555 command_runner.go:130] >         }
	I0817 21:43:55.012982  226555 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0817 21:43:55.012990  226555 command_runner.go:130] >            max_concurrent 1000
	I0817 21:43:55.012998  226555 command_runner.go:130] >         }
	I0817 21:43:55.013004  226555 command_runner.go:130] >         cache 30
	I0817 21:43:55.013013  226555 command_runner.go:130] >         loop
	I0817 21:43:55.013020  226555 command_runner.go:130] >         reload
	I0817 21:43:55.013030  226555 command_runner.go:130] >         loadbalance
	I0817 21:43:55.013038  226555 command_runner.go:130] >     }
	I0817 21:43:55.013048  226555 command_runner.go:130] > kind: ConfigMap
	I0817 21:43:55.013057  226555 command_runner.go:130] > metadata:
	I0817 21:43:55.013068  226555 command_runner.go:130] >   creationTimestamp: "2023-08-17T21:33:26Z"
	I0817 21:43:55.013078  226555 command_runner.go:130] >   name: coredns
	I0817 21:43:55.013087  226555 command_runner.go:130] >   namespace: kube-system
	I0817 21:43:55.013097  226555 command_runner.go:130] >   resourceVersion: "400"
	I0817 21:43:55.013108  226555 command_runner.go:130] >   uid: e9226e04-c717-47b9-9786-67441c6d4d26
	I0817 21:43:55.013213  226555 node_ready.go:35] waiting up to 6m0s for node "multinode-959371" to be "Ready" ...
	I0817 21:43:55.013273  226555 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
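The command_runner dump above is the coredns ConfigMap's Corefile, and the start.go:874 line records that its hosts block already carries host.minikube.internal, so no rewrite is needed. A tiny sketch of that containment check, assuming the Corefile text has already been fetched as shown (this is illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"strings"
)

// hasMinikubeHostRecord reports whether the CoreDNS Corefile already carries a
// hosts entry for the given name (e.g. "host.minikube.internal").
func hasMinikubeHostRecord(corefile, host string) bool {
	return strings.Contains(corefile, host)
}

func main() {
	// Abbreviated stand-in for the Corefile dumped in the log above.
	corefile := `.:53 {
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}`
	if hasMinikubeHostRecord(corefile, "host.minikube.internal") {
		fmt.Println(`CoreDNS already contains "host.minikube.internal" host record, skipping...`)
	}
}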
	I0817 21:43:55.080578  226555 request.go:628] Waited for 67.238016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:55.080658  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:55.080664  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:55.080672  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:55.080679  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:55.083707  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:55.083743  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:55.083755  226555 round_trippers.go:580]     Audit-Id: 5f7dcca4-845d-4289-8b60-0e6da7b817bc
	I0817 21:43:55.083764  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:55.083773  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:55.083782  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:55.083788  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:55.083793  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:55 GMT
	I0817 21:43:55.085435  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:55.281347  226555 request.go:628] Waited for 195.461773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:55.281451  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:55.281462  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:55.281500  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:55.281515  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:55.284238  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:55.284265  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:55.284273  226555 round_trippers.go:580]     Audit-Id: b9ff6956-cc83-4c06-9953-0b2664d83b0f
	I0817 21:43:55.284278  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:55.284283  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:55.284289  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:55.284294  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:55.284300  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:55 GMT
	I0817 21:43:55.284518  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"750","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0817 21:43:55.785364  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:55.785389  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:55.785400  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:55.785408  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:55.790302  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:43:55.790333  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:55.790351  226555 round_trippers.go:580]     Audit-Id: b550d3be-daec-4d0f-809b-d17ce474ada3
	I0817 21:43:55.790361  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:55.790367  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:55.790372  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:55.790378  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:55.790383  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:55 GMT
	I0817 21:43:55.790611  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:55.791052  226555 node_ready.go:49] node "multinode-959371" has status "Ready":"True"
	I0817 21:43:55.791077  226555 node_ready.go:38] duration metric: took 777.835312ms waiting for node "multinode-959371" to be "Ready" ...
	I0817 21:43:55.791090  226555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
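The node GETs above are the readiness poll: roughly every half second the client re-reads node multinode-959371 until its NodeReady condition reports True, which here took about 0.78s. A hedged client-go sketch of such a poll follows; the kubeconfig path and the 6-minute deadline are assumptions taken from the surrounding log, and this is generic illustration rather than minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node carries a NodeReady condition with status True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-959371", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polling
	}
	fmt.Println("timed out waiting for node to become Ready")
}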
	I0817 21:43:55.791164  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:43:55.791174  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:55.791185  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:55.791194  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:55.800238  226555 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0817 21:43:55.800273  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:55.800285  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:55.800294  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:55.800313  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:55.800318  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:55.800324  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:55 GMT
	I0817 21:43:55.800329  226555 round_trippers.go:580]     Audit-Id: 269ad48e-d3d6-4086-8b86-0b2144af1406
	I0817 21:43:55.803456  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82405 chars]
	I0817 21:43:55.807339  226555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:55.880776  226555 request.go:628] Waited for 73.320348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:55.880859  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:55.880867  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:55.880878  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:55.880887  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:55.884243  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:55.884290  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:55.884302  226555 round_trippers.go:580]     Audit-Id: 02ec2657-1bb0-49b4-92b5-c2246d2fe529
	I0817 21:43:55.884310  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:55.884317  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:55.884326  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:55.884334  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:55.884342  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:55 GMT
	I0817 21:43:55.884535  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:56.080485  226555 request.go:628] Waited for 195.326818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:56.080550  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:56.080555  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:56.080563  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:56.080568  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:56.083519  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:56.083550  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:56.083561  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:56.083569  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:56.083578  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:56.083590  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:56 GMT
	I0817 21:43:56.083598  226555 round_trippers.go:580]     Audit-Id: 6fccc7c0-7993-4f36-ab43-44d85bd05d1b
	I0817 21:43:56.083610  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:56.083799  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:56.280649  226555 request.go:628] Waited for 196.42695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:56.280727  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:56.280732  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:56.280740  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:56.280747  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:56.283765  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:56.283788  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:56.283795  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:56.283804  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:56 GMT
	I0817 21:43:56.283813  226555 round_trippers.go:580]     Audit-Id: ea6d617c-8cb3-4209-aed7-011314313f92
	I0817 21:43:56.283822  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:56.283833  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:56.283847  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:56.284073  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:56.481076  226555 request.go:628] Waited for 196.384585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:56.481153  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:56.481159  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:56.481170  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:56.481184  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:56.483977  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:56.484001  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:56.484011  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:56.484020  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:56.484027  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:56.484032  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:56 GMT
	I0817 21:43:56.484038  226555 round_trippers.go:580]     Audit-Id: 0621b7a6-8c9a-4361-9c91-98158ccf31a8
	I0817 21:43:56.484044  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:56.484203  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:56.985421  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:56.985445  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:56.985454  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:56.985460  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:56.989336  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:56.989366  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:56.989375  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:56.989381  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:56 GMT
	I0817 21:43:56.989386  226555 round_trippers.go:580]     Audit-Id: 8a5f6dd3-75ed-4f4e-b5a9-976297d2bfe0
	I0817 21:43:56.989392  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:56.989397  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:56.989402  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:56.990442  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:56.991020  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:56.991035  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:56.991048  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:56.991058  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:56.993270  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:56.993293  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:56.993303  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:56.993312  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:56.993320  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:56.993328  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:56.993339  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:56 GMT
	I0817 21:43:56.993351  226555 round_trippers.go:580]     Audit-Id: a8f1bbf4-9475-4972-86fa-e0025fb4ae36
	I0817 21:43:56.993549  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:57.485221  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:57.485251  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:57.485260  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:57.485266  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:57.488592  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:57.488619  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:57.488627  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:57.488633  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:57.488638  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:57.488644  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:57.488650  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:57 GMT
	I0817 21:43:57.488655  226555 round_trippers.go:580]     Audit-Id: 1a013c87-8977-4f31-a812-028251771248
	I0817 21:43:57.489165  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:57.489708  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:57.489723  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:57.489731  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:57.489737  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:57.492295  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:57.492312  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:57.492319  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:57.492325  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:57 GMT
	I0817 21:43:57.492330  226555 round_trippers.go:580]     Audit-Id: 5e5631af-8a97-42ca-bc38-93693b633ad3
	I0817 21:43:57.492335  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:57.492340  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:57.492346  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:57.492517  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:57.985561  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:57.985586  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:57.985595  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:57.985601  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:57.988981  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:43:57.989003  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:57.989012  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:57.989018  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:57.989023  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:57.989028  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:57 GMT
	I0817 21:43:57.989034  226555 round_trippers.go:580]     Audit-Id: a5bffabb-bcca-4aff-86a1-94d790ec46f4
	I0817 21:43:57.989039  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:57.989441  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:57.989900  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:57.989915  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:57.989924  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:57.989931  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:57.992200  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:57.992223  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:57.992234  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:57.992242  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:57.992251  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:57 GMT
	I0817 21:43:57.992259  226555 round_trippers.go:580]     Audit-Id: 04e74ae6-53e1-46cf-9bc6-bb13437cd5f8
	I0817 21:43:57.992267  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:57.992289  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:57.992493  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:57.992920  226555 pod_ready.go:102] pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace has status "Ready":"False"
	I0817 21:43:58.485164  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:58.485196  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:58.485207  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:58.485214  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:58.489619  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:43:58.489646  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:58.489653  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:58.489661  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:58 GMT
	I0817 21:43:58.489672  226555 round_trippers.go:580]     Audit-Id: 26763209-c15c-450c-85ae-5c45debc9d13
	I0817 21:43:58.489680  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:58.489688  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:58.489695  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:58.489984  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:58.490502  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:58.490518  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:58.490526  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:58.490532  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:58.495261  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:43:58.495281  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:58.495288  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:58.495294  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:58.495299  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:58.495305  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:58.495313  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:58 GMT
	I0817 21:43:58.495322  226555 round_trippers.go:580]     Audit-Id: 2af10c9c-5c37-450f-ac15-6d914382b7f4
	I0817 21:43:58.495469  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:58.984878  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:58.984912  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:58.984925  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:58.984931  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:58.991909  226555 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0817 21:43:58.991936  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:58.991943  226555 round_trippers.go:580]     Audit-Id: 35d85e2e-5d90-4132-8ac9-636bf16d7d34
	I0817 21:43:58.991949  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:58.991955  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:58.991967  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:58.991978  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:58.991985  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:58 GMT
	I0817 21:43:58.992216  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:58.992857  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:58.992874  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:58.992886  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:58.992896  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:58.997496  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:43:58.997518  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:58.997529  226555 round_trippers.go:580]     Audit-Id: 8a8656ff-0a9a-45b9-9fba-d86b9e802763
	I0817 21:43:58.997538  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:58.997544  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:58.997552  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:58.997561  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:58.997570  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:58 GMT
	I0817 21:43:58.997698  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:59.485261  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:59.485303  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:59.485317  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:59.485327  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:59.488198  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:59.488221  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:59.488232  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:59.488240  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:59.488257  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:43:59.488270  226555 round_trippers.go:580]     Audit-Id: 638e4bb0-5ee1-4905-b265-5ff69eaeb1c3
	I0817 21:43:59.488280  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:59.488290  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:59.488513  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"791","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0817 21:43:59.489109  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:59.489124  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:59.489136  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:59.489153  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:59.491658  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:59.491675  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:59.491685  226555 round_trippers.go:580]     Audit-Id: 0d24243a-1286-43d9-9e75-cd26b70324ca
	I0817 21:43:59.491693  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:59.491702  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:59.491717  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:59.491726  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:59.491739  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:43:59.491907  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:59.985624  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:43:59.985660  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:59.985671  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:59.985680  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:59.988385  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:59.988409  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:59.988419  226555 round_trippers.go:580]     Audit-Id: 5feae156-b193-4cff-8762-a4206de9a5a8
	I0817 21:43:59.988429  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:59.988438  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:59.988445  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:59.988450  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:59.988458  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:43:59.988681  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0817 21:43:59.989230  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:59.989246  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:59.989258  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:59.989268  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:59.991646  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:59.991665  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:59.991676  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:59.991686  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:59.991696  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:59.991706  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:43:59.991721  226555 round_trippers.go:580]     Audit-Id: f86525d3-2be4-4c28-bc8a-9658df2ed78c
	I0817 21:43:59.991731  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:59.991928  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:43:59.992223  226555 pod_ready.go:92] pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace has status "Ready":"True"
	I0817 21:43:59.992236  226555 pod_ready.go:81] duration metric: took 4.184867496s waiting for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:43:59.992245  226555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
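Each of these per-pod waits comes down to inspecting the pod's PodReady condition on every GET, as with coredns-5d78c9869d-87rlb above and etcd-multinode-959371 next. A small illustrative sketch of that condition check (generic client-go types, not minikube's pod_ready.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Stand-in pod object; in the test the pod comes back from the GETs logged above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Printf("pod \"Ready\": %v\n", podIsReady(pod))
}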
	I0817 21:43:59.992293  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-959371
	I0817 21:43:59.992299  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:59.992306  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:59.992315  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:43:59.994687  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:43:59.994704  226555 round_trippers.go:577] Response Headers:
	I0817 21:43:59.994714  226555 round_trippers.go:580]     Audit-Id: 4eb5d5c0-5472-4f60-97ec-dbbe0b3a70a8
	I0817 21:43:59.994723  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:43:59.994733  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:43:59.994755  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:43:59.994761  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:43:59.994767  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:43:59.994944  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"866","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0817 21:43:59.995350  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:43:59.995364  226555 round_trippers.go:469] Request Headers:
	I0817 21:43:59.995371  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:43:59.995377  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.001015  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:44:00.001049  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.001056  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.001062  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.001067  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.001072  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:44:00.001081  226555 round_trippers.go:580]     Audit-Id: 46df6876-25f7-48d5-b273-0e2c87057413
	I0817 21:44:00.001089  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.001250  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:00.001702  226555 pod_ready.go:92] pod "etcd-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:00.001722  226555 pod_ready.go:81] duration metric: took 9.471667ms waiting for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:00.001742  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:00.001802  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-959371
	I0817 21:44:00.001810  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:00.001817  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:00.001823  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.004310  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:00.004329  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.004335  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:43:59 GMT
	I0817 21:44:00.004343  226555 round_trippers.go:580]     Audit-Id: 3ad77539-28fa-4b0e-90a5-b1ceb734b4dd
	I0817 21:44:00.004352  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.004360  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.004368  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.004377  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.005207  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-959371","namespace":"kube-system","uid":"0efb1ae7-705a-47df-91c6-0d9390b68983","resourceVersion":"863","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.104:8443","kubernetes.io/config.hash":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.mirror":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.seen":"2023-08-17T21:33:26.519082064Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0817 21:44:00.080972  226555 request.go:628] Waited for 75.258037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:00.081051  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:00.081062  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:00.081076  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.081087  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:00.084003  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:00.084035  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.084047  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.084059  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.084069  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:00 GMT
	I0817 21:44:00.084081  226555 round_trippers.go:580]     Audit-Id: a408262b-478d-4301-8c4d-823348800582
	I0817 21:44:00.084092  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.084104  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.084225  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:00.084601  226555 pod_ready.go:92] pod "kube-apiserver-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:00.084617  226555 pod_ready.go:81] duration metric: took 82.868631ms waiting for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
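Note on the "Waited for 75.258037ms due to client-side throttling, not priority and fairness" lines above: that delay is imposed by client-go's own rate limiter rather than by the server. When QPS/Burst are left unset on the client config, client-go defaults to roughly 5 requests per second with a burst of 10, so back-to-back pod and node GETs queue briefly. The sketch below shows one way to raise those limits when building a client; the package name, path, and the chosen values are illustrative assumptions, not something the harness does.

package example

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterConfig loads a kubeconfig and raises the client-side rate limit so
// that bursts of pod/node GETs are not queued by the default limiter; the
// values 50/100 are arbitrary illustrative choices, not tuned settings.
func newFasterConfig(kubeconfig string) (*rest.Config, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return cfg, nil
}

In this run the throttling only adds tens of milliseconds per wait cycle, so it is visible in the log but not material to the overall durations.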
	I0817 21:44:00.084627  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:00.281135  226555 request.go:628] Waited for 196.432462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:00.281228  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:00.281235  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:00.281247  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:00.281262  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.284176  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:00.284200  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.284214  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.284220  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.284225  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.284230  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:00 GMT
	I0817 21:44:00.284236  226555 round_trippers.go:580]     Audit-Id: db7e2c6c-adce-473e-890b-a302bad786ba
	I0817 21:44:00.284241  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.284462  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:00.480468  226555 request.go:628] Waited for 195.303629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:00.480527  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:00.480532  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:00.480540  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:00.480551  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.483382  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:00.483403  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.483410  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.483417  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:00 GMT
	I0817 21:44:00.483422  226555 round_trippers.go:580]     Audit-Id: 3890173f-4766-479f-b9e2-7cd1ea2484b3
	I0817 21:44:00.483427  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.483435  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.483444  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.483564  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:00.680320  226555 request.go:628] Waited for 196.317398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:00.680411  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:00.680419  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:00.680430  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:00.680442  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.683759  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:00.683784  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.683792  226555 round_trippers.go:580]     Audit-Id: bed58e2d-4a77-4ac8-b8be-1d98539b2bb2
	I0817 21:44:00.683798  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.683803  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.683808  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.683814  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.683822  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:00 GMT
	I0817 21:44:00.683943  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:00.880735  226555 request.go:628] Waited for 196.254696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:00.880818  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:00.880835  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:00.880846  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:00.880854  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:00.883680  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:00.883751  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:00.883772  226555 round_trippers.go:580]     Audit-Id: c3de5581-b84c-4636-a9fa-24d5a01437d7
	I0817 21:44:00.883782  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:00.883797  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:00.883809  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:00.883819  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:00.883827  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:00 GMT
	I0817 21:44:00.884032  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:01.385252  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:01.385281  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:01.385290  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:01.385296  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:01.388188  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:01.388216  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:01.388227  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:01.388235  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:01 GMT
	I0817 21:44:01.388242  226555 round_trippers.go:580]     Audit-Id: b7a745c1-1535-4d56-81db-e5e28313c29a
	I0817 21:44:01.388250  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:01.388258  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:01.388266  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:01.388415  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:01.388979  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:01.388996  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:01.389007  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:01.389019  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:01.391174  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:01.391189  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:01.391197  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:01.391203  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:01.391212  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:01 GMT
	I0817 21:44:01.391220  226555 round_trippers.go:580]     Audit-Id: a8446104-9493-4981-9f07-48970893d70d
	I0817 21:44:01.391233  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:01.391241  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:01.391530  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:01.885310  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:01.885341  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:01.885351  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:01.885359  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:01.889264  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:01.889294  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:01.889302  226555 round_trippers.go:580]     Audit-Id: 7c2ce1cb-3600-4dfd-b3b2-b0e9f7521fdb
	I0817 21:44:01.889308  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:01.889314  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:01.889319  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:01.889327  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:01.889335  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:01 GMT
	I0817 21:44:01.889499  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:01.890094  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:01.890110  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:01.890119  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:01.890132  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:01.892943  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:01.892966  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:01.892977  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:01.892986  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:01.892994  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:01 GMT
	I0817 21:44:01.893000  226555 round_trippers.go:580]     Audit-Id: 0f7f9a22-3772-4a3d-860b-e001befdf0e5
	I0817 21:44:01.893008  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:01.893016  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:01.893130  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:02.385549  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:02.385579  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:02.385591  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:02.385599  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:02.389798  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:44:02.389824  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:02.389835  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:02.389841  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:02.389847  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:02.389852  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:02.389862  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:02 GMT
	I0817 21:44:02.389869  226555 round_trippers.go:580]     Audit-Id: 204653d3-d6c7-458c-87ce-fa613251246a
	I0817 21:44:02.390017  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:02.390523  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:02.390539  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:02.390547  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:02.390553  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:02.394775  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:44:02.394796  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:02.394806  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:02.394813  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:02 GMT
	I0817 21:44:02.394821  226555 round_trippers.go:580]     Audit-Id: 0d624c01-1be4-4e80-91a3-e09d53dbeb6c
	I0817 21:44:02.394829  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:02.394837  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:02.394848  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:02.394954  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:02.395361  226555 pod_ready.go:102] pod "kube-controller-manager-multinode-959371" in "kube-system" namespace has status "Ready":"False"
	I0817 21:44:02.884831  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:02.884860  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:02.884873  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:02.884880  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:02.888382  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:02.888412  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:02.888421  226555 round_trippers.go:580]     Audit-Id: f87f9893-d626-4439-ae2f-950b28f0ec57
	I0817 21:44:02.888431  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:02.888440  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:02.888448  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:02.888457  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:02.888464  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:02 GMT
	I0817 21:44:02.889070  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:02.889515  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:02.889525  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:02.889533  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:02.889538  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:02.892879  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:02.892897  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:02.892906  226555 round_trippers.go:580]     Audit-Id: 7e497edb-5cb1-447a-842b-9829ba39cc1e
	I0817 21:44:02.892912  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:02.892918  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:02.892923  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:02.892928  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:02.892934  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:02 GMT
	I0817 21:44:02.893617  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:03.385369  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:03.385394  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:03.385403  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:03.385410  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:03.390838  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:44:03.390862  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:03.390869  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:03.390875  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:03.390881  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:03.390886  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:03 GMT
	I0817 21:44:03.390892  226555 round_trippers.go:580]     Audit-Id: 1a932516-7a48-4412-8f0d-f83c8266ee1a
	I0817 21:44:03.390897  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:03.391511  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:03.391936  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:03.391948  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:03.391956  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:03.391962  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:03.395783  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:03.395805  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:03.395812  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:03.395818  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:03 GMT
	I0817 21:44:03.395834  226555 round_trippers.go:580]     Audit-Id: a9e30087-c99e-4494-a94a-a5f09ce36227
	I0817 21:44:03.395840  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:03.395845  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:03.395854  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:03.396679  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:03.885354  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:03.885387  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:03.885395  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:03.885402  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:03.888731  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:03.888753  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:03.888760  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:03.888766  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:03.888771  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:03 GMT
	I0817 21:44:03.888776  226555 round_trippers.go:580]     Audit-Id: e4a10092-0640-40c0-85cf-d80947c9f28e
	I0817 21:44:03.888782  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:03.888787  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:03.888970  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:03.889422  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:03.889434  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:03.889442  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:03.889448  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:03.892113  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:03.892133  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:03.892140  226555 round_trippers.go:580]     Audit-Id: 57b889f7-14c5-4eaa-8117-1c8a969d4bc6
	I0817 21:44:03.892146  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:03.892153  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:03.892161  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:03.892170  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:03.892177  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:03 GMT
	I0817 21:44:03.892266  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:04.384895  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:04.384921  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.384931  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.384937  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.388077  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:04.388111  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.388122  226555 round_trippers.go:580]     Audit-Id: d769b89a-f0fb-4254-a8e5-5294b939f934
	I0817 21:44:04.388131  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.388140  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.388146  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.388153  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.388162  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.388358  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"792","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0817 21:44:04.388810  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:04.388825  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.388832  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.388842  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.391075  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:04.391092  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.391099  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.391105  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.391111  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.391120  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.391126  226555 round_trippers.go:580]     Audit-Id: 946df44b-5522-4f44-a654-62941d122c4a
	I0817 21:44:04.391134  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.391378  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:04.885033  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:44:04.885071  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.885079  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.885086  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.888256  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:04.888302  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.888314  226555 round_trippers.go:580]     Audit-Id: 5872e06c-82a9-4661-8300-b5773a9c4a48
	I0817 21:44:04.888324  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.888332  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.888341  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.888350  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.888359  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.888516  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"892","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0817 21:44:04.889107  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:04.889131  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.889143  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.889153  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.891869  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:04.891893  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.891903  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.891917  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.891924  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.891931  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.891939  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.891947  226555 round_trippers.go:580]     Audit-Id: 544bdb8b-b32e-4bd9-974d-e83cc3649888
	I0817 21:44:04.892661  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:04.893019  226555 pod_ready.go:92] pod "kube-controller-manager-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:04.893037  226555 pod_ready.go:81] duration metric: took 4.808402753s waiting for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:04.893052  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:04.893115  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:44:04.893126  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.893137  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.893147  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.897369  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:44:04.897396  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.897407  226555 round_trippers.go:580]     Audit-Id: 1f29c18e-4715-4434-b6e0-7e8a12d28eaa
	I0817 21:44:04.897415  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.897424  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.897431  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.897438  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.897446  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.897556  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gdf7","generateName":"kube-proxy-","namespace":"kube-system","uid":"00e6f433-51d6-49bb-a927-780720361eb3","resourceVersion":"831","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0817 21:44:04.898078  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:04.898097  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.898107  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.898118  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.902957  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:44:04.902983  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.902991  226555 round_trippers.go:580]     Audit-Id: 678e74d6-2d4a-4c84-b627-9b2379cfe5da
	I0817 21:44:04.902997  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.903002  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.903008  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.903013  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.903019  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.903136  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:04.903537  226555 pod_ready.go:92] pod "kube-proxy-8gdf7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:04.903554  226555 pod_ready.go:81] duration metric: took 10.495499ms waiting for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:04.903565  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:04.903638  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:44:04.903648  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.903659  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.903673  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.905995  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:04.906022  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.906029  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.906036  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.906041  226555 round_trippers.go:580]     Audit-Id: d0ec9eba-794f-4afd-9b04-745e865d49d4
	I0817 21:44:04.906047  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.906068  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.906084  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.906190  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g94gj","generateName":"kube-proxy-","namespace":"kube-system","uid":"050b1eab-a69f-4f6f-b3b8-f29ef38c9042","resourceVersion":"719","creationTimestamp":"2023-08-17T21:35:12Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0817 21:44:04.906679  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:44:04.906696  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:04.906707  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:04.906717  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:04.909894  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:04.909913  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:04.909920  226555 round_trippers.go:580]     Audit-Id: dca72b94-ed63-435f-b20f-a2b6a3e98e90
	I0817 21:44:04.909928  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:04.909936  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:04.909943  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:04.909954  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:04.909965  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:04 GMT
	I0817 21:44:04.910126  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m03","uid":"31bc1a59-dff0-4542-804e-a9c019ecd2f4","resourceVersion":"889","creationTimestamp":"2023-08-17T21:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0817 21:44:04.910445  226555 pod_ready.go:92] pod "kube-proxy-g94gj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:04.910460  226555 pod_ready.go:81] duration metric: took 6.887472ms waiting for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:04.910477  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:05.080951  226555 request.go:628] Waited for 170.392043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:44:05.081108  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:44:05.081151  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:05.081166  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:05.081206  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:05.085334  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:44:05.085355  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:05.085362  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:05.085370  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:05.085380  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:05.085393  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:05.085401  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:05 GMT
	I0817 21:44:05.085410  226555 round_trippers.go:580]     Audit-Id: 8f64a163-747b-4339-9190-4ff0d951018e
	I0817 21:44:05.085796  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmldj","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac59040d-df0c-416f-9660-4a41f7b75789","resourceVersion":"519","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0817 21:44:05.280830  226555 request.go:628] Waited for 194.427073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:44:05.280896  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:44:05.280905  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:05.280920  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:05.280933  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:05.283809  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:05.283834  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:05.283845  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:05.283852  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:05.283860  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:05 GMT
	I0817 21:44:05.283868  226555 round_trippers.go:580]     Audit-Id: 3d2318e2-81fb-4dfa-a1bd-194a96e10981
	I0817 21:44:05.283881  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:05.283895  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:05.283988  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f","resourceVersion":"744","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0817 21:44:05.284285  226555 pod_ready.go:92] pod "kube-proxy-zmldj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:05.284301  226555 pod_ready.go:81] duration metric: took 373.812968ms waiting for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:05.284315  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:05.480808  226555 request.go:628] Waited for 196.388933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:44:05.480939  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:44:05.480950  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:05.480958  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:05.480964  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:05.483872  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:44:05.483896  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:05.483909  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:05.483918  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:05.483926  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:05 GMT
	I0817 21:44:05.483934  226555 round_trippers.go:580]     Audit-Id: 892e2deb-7d7d-4080-bf85-29b932f30c4f
	I0817 21:44:05.483943  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:05.483957  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:05.484050  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-959371","namespace":"kube-system","uid":"a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2","resourceVersion":"882","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.mirror":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.seen":"2023-08-17T21:33:26.519087461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0817 21:44:05.681005  226555 request.go:628] Waited for 196.427953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:05.681069  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:44:05.681076  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:05.681086  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:05.681095  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:05.684875  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:05.684903  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:05.684914  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:05.684922  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:05.684929  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:05.684937  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:05 GMT
	I0817 21:44:05.684945  226555 round_trippers.go:580]     Audit-Id: a6c480de-335d-4d7c-8304-c908f4b2b9bb
	I0817 21:44:05.684952  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:05.685071  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0817 21:44:05.685433  226555 pod_ready.go:92] pod "kube-scheduler-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:44:05.685454  226555 pod_ready.go:81] duration metric: took 401.126386ms waiting for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:44:05.685469  226555 pod_ready.go:38] duration metric: took 9.894367056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
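
The readiness loop recorded above repeatedly fetches each control-plane pod and checks its Ready condition before moving on. Below is a minimal client-go sketch of that kind of poll; the kubeconfig path, namespace, deadline, and pod name are illustrative assumptions, not values taken from this run.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path; minikube keeps its own under the profile directory.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll one kube-system pod until it reports Ready or the deadline passes.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-scheduler-example-node", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod readiness")
    }
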
	I0817 21:44:05.685491  226555 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:44:05.685553  226555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:44:05.705431  226555 command_runner.go:130] > 1109
	I0817 21:44:05.705526  226555 api_server.go:72] duration metric: took 10.79641082s to wait for apiserver process to appear ...
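
The step above confirms an apiserver process exists by running pgrep over SSH inside the VM. A rough sketch of the same check follows; the pattern matches what the log runs, but the command is executed locally here rather than over SSH, purely for illustration.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -x: match the pattern against the whole command line, -n: newest match,
        // -f: match against the full argument list rather than just the process name.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no matching apiserver process found:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
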
	I0817 21:44:05.705542  226555 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:44:05.705565  226555 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:44:05.711053  226555 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
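
The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with the body "ok". A bare-bones sketch of such a probe is shown below; the address comes from the log, but skipping TLS verification and relying on anonymous access are simplifying assumptions (a real client would load the cluster CA and credentials).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip certificate verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.104:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
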
	I0817 21:44:05.711129  226555 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I0817 21:44:05.711141  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:05.711152  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:05.711166  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:05.712608  226555 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:44:05.712645  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:05.712657  226555 round_trippers.go:580]     Audit-Id: 1a88e836-3750-4e53-a4ec-349e7c3f7ca7
	I0817 21:44:05.712666  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:05.712675  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:05.712684  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:05.712693  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:05.712703  226555 round_trippers.go:580]     Content-Length: 263
	I0817 21:44:05.712712  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:05 GMT
	I0817 21:44:05.712764  226555 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0817 21:44:05.712829  226555 api_server.go:141] control plane version: v1.27.4
	I0817 21:44:05.712846  226555 api_server.go:131] duration metric: took 7.297008ms to wait for apiserver health ...
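
The control-plane version above is read from the apiserver's /version endpoint and decoded from the JSON body printed earlier. A small sketch of decoding that payload into a struct; the field names mirror the response shown in the log, while the URL and TLS handling carry the same simplifying assumptions as the healthz sketch.

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // versionInfo mirrors the fields of the /version response body shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: no CA loaded
        }}
        resp, err := client.Get("https://192.168.39.104:8443/version")
        if err != nil {
            fmt.Println("version request failed:", err)
            return
        }
        defer resp.Body.Close()

        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("control plane version: %s (%s)\n", v.GitVersion, v.Platform)
    }
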
	I0817 21:44:05.712871  226555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:44:05.880431  226555 request.go:628] Waited for 167.45694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:44:05.880509  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:44:05.880514  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:05.880523  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:05.880530  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:05.886184  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:44:05.886208  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:05.886226  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:05.886237  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:05.886245  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:05.886255  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:05 GMT
	I0817 21:44:05.886263  226555 round_trippers.go:580]     Audit-Id: 33f682f5-f8de-494f-8346-ef4d25677066
	I0817 21:44:05.886271  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:05.888118  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"892"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81889 chars]
	I0817 21:44:05.891861  226555 system_pods.go:59] 12 kube-system pods found
	I0817 21:44:05.891900  226555 system_pods.go:61] "coredns-5d78c9869d-87rlb" [52da85e0-72f0-4919-8615-d1cb46b65ca4] Running
	I0817 21:44:05.891909  226555 system_pods.go:61] "etcd-multinode-959371" [0ffe6db5-4285-4788-88b2-073753ece5f3] Running
	I0817 21:44:05.891920  226555 system_pods.go:61] "kindnet-cmxkw" [0118d29d-3f4f-460d-b5ab-653c3b98d7fa] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 21:44:05.891929  226555 system_pods.go:61] "kindnet-s7l7j" [6af177c8-cc30-4a86-98d8-443cef5036d8] Running
	I0817 21:44:05.891939  226555 system_pods.go:61] "kindnet-xjn26" [78b21525-477a-49fa-8fb9-12ba1f58c418] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 21:44:05.891946  226555 system_pods.go:61] "kube-apiserver-multinode-959371" [0efb1ae7-705a-47df-91c6-0d9390b68983] Running
	I0817 21:44:05.891954  226555 system_pods.go:61] "kube-controller-manager-multinode-959371" [00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f] Running
	I0817 21:44:05.891961  226555 system_pods.go:61] "kube-proxy-8gdf7" [00e6f433-51d6-49bb-a927-780720361eb3] Running
	I0817 21:44:05.891967  226555 system_pods.go:61] "kube-proxy-g94gj" [050b1eab-a69f-4f6f-b3b8-f29ef38c9042] Running
	I0817 21:44:05.891975  226555 system_pods.go:61] "kube-proxy-zmldj" [ac59040d-df0c-416f-9660-4a41f7b75789] Running
	I0817 21:44:05.891980  226555 system_pods.go:61] "kube-scheduler-multinode-959371" [a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2] Running
	I0817 21:44:05.891990  226555 system_pods.go:61] "storage-provisioner" [e8aa1192-3588-49da-be88-15a801d006fc] Running
	I0817 21:44:05.891997  226555 system_pods.go:74] duration metric: took 179.116044ms to wait for pod list to return data ...
	I0817 21:44:05.892019  226555 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:44:06.080413  226555 request.go:628] Waited for 188.304062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:44:06.080518  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:44:06.080530  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:06.080538  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:06.080545  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:06.083834  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:06.083861  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:06.083871  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:06.083879  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:06.083886  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:06.083895  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:06.083905  226555 round_trippers.go:580]     Content-Length: 261
	I0817 21:44:06.083915  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:06 GMT
	I0817 21:44:06.083925  226555 round_trippers.go:580]     Audit-Id: de9f0ed7-42f2-40da-9a77-35d58190de88
	I0817 21:44:06.084008  226555 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"892"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c7ddc132-1c40-459d-89b5-903ee7cd5edc","resourceVersion":"343","creationTimestamp":"2023-08-17T21:33:38Z"}}]}
	I0817 21:44:06.084269  226555 default_sa.go:45] found service account: "default"
	I0817 21:44:06.084286  226555 default_sa.go:55] duration metric: took 192.260959ms for default service account to be created ...
	I0817 21:44:06.084296  226555 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:44:06.280783  226555 request.go:628] Waited for 196.391203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:44:06.280873  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:44:06.280880  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:06.280892  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:06.280902  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:06.285772  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:44:06.285806  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:06.285820  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:06.285828  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:06.285836  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:06 GMT
	I0817 21:44:06.285845  226555 round_trippers.go:580]     Audit-Id: 177702c1-2237-4898-9fab-d6f75a9acf7b
	I0817 21:44:06.285853  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:06.285862  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:06.288160  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"892"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81889 chars]
	I0817 21:44:06.290751  226555 system_pods.go:86] 12 kube-system pods found
	I0817 21:44:06.290777  226555 system_pods.go:89] "coredns-5d78c9869d-87rlb" [52da85e0-72f0-4919-8615-d1cb46b65ca4] Running
	I0817 21:44:06.290783  226555 system_pods.go:89] "etcd-multinode-959371" [0ffe6db5-4285-4788-88b2-073753ece5f3] Running
	I0817 21:44:06.290790  226555 system_pods.go:89] "kindnet-cmxkw" [0118d29d-3f4f-460d-b5ab-653c3b98d7fa] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 21:44:06.290796  226555 system_pods.go:89] "kindnet-s7l7j" [6af177c8-cc30-4a86-98d8-443cef5036d8] Running
	I0817 21:44:06.290803  226555 system_pods.go:89] "kindnet-xjn26" [78b21525-477a-49fa-8fb9-12ba1f58c418] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 21:44:06.290807  226555 system_pods.go:89] "kube-apiserver-multinode-959371" [0efb1ae7-705a-47df-91c6-0d9390b68983] Running
	I0817 21:44:06.290813  226555 system_pods.go:89] "kube-controller-manager-multinode-959371" [00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f] Running
	I0817 21:44:06.290817  226555 system_pods.go:89] "kube-proxy-8gdf7" [00e6f433-51d6-49bb-a927-780720361eb3] Running
	I0817 21:44:06.290821  226555 system_pods.go:89] "kube-proxy-g94gj" [050b1eab-a69f-4f6f-b3b8-f29ef38c9042] Running
	I0817 21:44:06.290824  226555 system_pods.go:89] "kube-proxy-zmldj" [ac59040d-df0c-416f-9660-4a41f7b75789] Running
	I0817 21:44:06.290828  226555 system_pods.go:89] "kube-scheduler-multinode-959371" [a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2] Running
	I0817 21:44:06.290832  226555 system_pods.go:89] "storage-provisioner" [e8aa1192-3588-49da-be88-15a801d006fc] Running
	I0817 21:44:06.290838  226555 system_pods.go:126] duration metric: took 206.530428ms to wait for k8s-apps to be running ...
	I0817 21:44:06.290848  226555 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:44:06.290894  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:44:06.305321  226555 system_svc.go:56] duration metric: took 14.460768ms WaitForService to wait for kubelet.
	I0817 21:44:06.305353  226555 kubeadm.go:581] duration metric: took 11.396239121s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:44:06.305382  226555 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:44:06.480880  226555 request.go:628] Waited for 175.41398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I0817 21:44:06.480959  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I0817 21:44:06.480964  226555 round_trippers.go:469] Request Headers:
	I0817 21:44:06.480972  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:44:06.480978  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:44:06.484698  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:44:06.484734  226555 round_trippers.go:577] Response Headers:
	I0817 21:44:06.484746  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:44:06.484755  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:44:06.484764  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:44:06.484772  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:44:06 GMT
	I0817 21:44:06.484779  226555 round_trippers.go:580]     Audit-Id: 2e851857-de0d-44ef-8be1-eeb36aa05b37
	I0817 21:44:06.484788  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:44:06.485179  226555 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"892"},"items":[{"metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"861","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I0817 21:44:06.485828  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:44:06.485852  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:44:06.485872  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:44:06.485881  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:44:06.485885  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:44:06.485889  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:44:06.485893  226555 node_conditions.go:105] duration metric: took 180.507111ms to run NodePressure ...
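
The NodePressure verification above lists every node and reads its ephemeral-storage and CPU capacity. A compact client-go sketch of the same listing follows; the kubeconfig path is an illustrative assumption.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a ResourceList keyed by resource name; values are Quantities.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
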
	I0817 21:44:06.485908  226555 start.go:228] waiting for startup goroutines ...
	I0817 21:44:06.485917  226555 start.go:233] waiting for cluster config update ...
	I0817 21:44:06.485925  226555 start.go:242] writing updated cluster config ...
	I0817 21:44:06.486447  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:44:06.486547  226555 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:44:06.490662  226555 out.go:177] * Starting worker node multinode-959371-m02 in cluster multinode-959371
	I0817 21:44:06.492226  226555 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:44:06.492263  226555 cache.go:57] Caching tarball of preloaded images
	I0817 21:44:06.492415  226555 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:44:06.492429  226555 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:44:06.492551  226555 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:44:06.492762  226555 start.go:365] acquiring machines lock for multinode-959371-m02: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:44:06.492811  226555 start.go:369] acquired machines lock for "multinode-959371-m02" in 27.068µs
	I0817 21:44:06.492825  226555 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:44:06.492833  226555 fix.go:54] fixHost starting: m02
	I0817 21:44:06.493116  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:44:06.493149  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:44:06.508505  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0817 21:44:06.509039  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:44:06.509607  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:44:06.509633  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:44:06.509985  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:44:06.510209  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:44:06.510362  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetState
	I0817 21:44:06.512261  226555 fix.go:102] recreateIfNeeded on multinode-959371-m02: state=Running err=<nil>
	W0817 21:44:06.512280  226555 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:44:06.514773  226555 out.go:177] * Updating the running kvm2 "multinode-959371-m02" VM ...
	I0817 21:44:06.516675  226555 machine.go:88] provisioning docker machine ...
	I0817 21:44:06.516709  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:44:06.517078  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:44:06.517249  226555 buildroot.go:166] provisioning hostname "multinode-959371-m02"
	I0817 21:44:06.517266  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:44:06.517393  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:44:06.520193  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.520609  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:44:06.520645  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.520789  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:44:06.521026  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:06.521169  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:06.521299  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:44:06.521432  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:44:06.521844  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:44:06.521863  226555 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959371-m02 && echo "multinode-959371-m02" | sudo tee /etc/hostname
	I0817 21:44:06.666832  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959371-m02
	
	I0817 21:44:06.666870  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:44:06.670167  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.670670  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:44:06.670706  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.670892  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:44:06.671102  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:06.671317  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:06.671504  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:44:06.671724  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:44:06.672195  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:44:06.672215  226555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959371-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959371-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959371-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:44:06.799665  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:44:06.799702  226555 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:44:06.799729  226555 buildroot.go:174] setting up certificates
	I0817 21:44:06.799742  226555 provision.go:83] configureAuth start
	I0817 21:44:06.799755  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetMachineName
	I0817 21:44:06.800198  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:44:06.803009  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.803366  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:44:06.803400  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.803589  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:44:06.805826  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.806152  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:44:06.806185  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.806334  226555 provision.go:138] copyHostCerts
	I0817 21:44:06.806367  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:44:06.806403  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 21:44:06.806411  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:44:06.806484  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:44:06.806559  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:44:06.806576  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 21:44:06.806583  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:44:06.806606  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:44:06.806670  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:44:06.806703  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 21:44:06.806712  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:44:06.806745  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:44:06.806812  226555 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.multinode-959371-m02 san=[192.168.39.175 192.168.39.175 localhost 127.0.0.1 minikube multinode-959371-m02]
	I0817 21:44:06.899177  226555 provision.go:172] copyRemoteCerts
	I0817 21:44:06.899259  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:44:06.899295  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:44:06.902501  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.902819  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:44:06.902857  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:06.903084  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:44:06.903312  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:06.903507  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:44:06.903638  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:44:06.995714  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:44:06.995813  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:44:07.022119  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:44:07.022225  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0817 21:44:07.047298  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:44:07.047398  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:44:07.072725  226555 provision.go:86] duration metric: configureAuth took 272.967811ms
	I0817 21:44:07.072761  226555 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:44:07.073025  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:44:07.073155  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:44:07.076309  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:07.076780  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:44:07.076836  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:44:07.077111  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:44:07.077350  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:07.077519  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:44:07.077670  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:44:07.077911  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:44:07.078359  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:44:07.078382  226555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:45:37.775600  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:45:37.775636  226555 machine.go:91] provisioned docker machine in 1m31.258939958s
	I0817 21:45:37.775650  226555 start.go:300] post-start starting for "multinode-959371-m02" (driver="kvm2")
	I0817 21:45:37.775681  226555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:45:37.775744  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:45:37.776309  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:45:37.776353  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:45:37.779323  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:37.779840  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:45:37.779878  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:37.780034  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:45:37.780242  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:45:37.780407  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:45:37.780578  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:45:37.877170  226555 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:45:37.881653  226555 command_runner.go:130] > NAME=Buildroot
	I0817 21:45:37.881675  226555 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0817 21:45:37.881680  226555 command_runner.go:130] > ID=buildroot
	I0817 21:45:37.881685  226555 command_runner.go:130] > VERSION_ID=2021.02.12
	I0817 21:45:37.881707  226555 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0817 21:45:37.881742  226555 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:45:37.881758  226555 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:45:37.881830  226555 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:45:37.881958  226555 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 21:45:37.881967  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /etc/ssl/certs/2106702.pem
	I0817 21:45:37.882076  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:45:37.890724  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:45:37.915195  226555 start.go:303] post-start completed in 139.526767ms
	I0817 21:45:37.915226  226555 fix.go:56] fixHost completed within 1m31.422394266s
	I0817 21:45:37.915286  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:45:37.917872  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:37.918380  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:45:37.918415  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:37.918603  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:45:37.918838  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:45:37.919022  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:45:37.919128  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:45:37.919326  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:45:37.919842  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0817 21:45:37.919857  226555 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 21:45:38.047287  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692308738.040824461
	
	I0817 21:45:38.047312  226555 fix.go:206] guest clock: 1692308738.040824461
	I0817 21:45:38.047323  226555 fix.go:219] Guest: 2023-08-17 21:45:38.040824461 +0000 UTC Remote: 2023-08-17 21:45:37.915231749 +0000 UTC m=+450.360988253 (delta=125.592712ms)
	I0817 21:45:38.047344  226555 fix.go:190] guest clock delta is within tolerance: 125.592712ms
	I0817 21:45:38.047349  226555 start.go:83] releasing machines lock for "multinode-959371-m02", held for 1m31.554528228s
	I0817 21:45:38.047411  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:45:38.047708  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:45:38.051130  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:38.051499  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:45:38.051526  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:38.053829  226555 out.go:177] * Found network options:
	I0817 21:45:38.055671  226555 out.go:177]   - NO_PROXY=192.168.39.104
	W0817 21:45:38.057171  226555 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:45:38.057247  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:45:38.057993  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:45:38.058219  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:45:38.058328  226555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:45:38.058374  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	W0817 21:45:38.058442  226555 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:45:38.058548  226555 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:45:38.058570  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:45:38.061253  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:38.061367  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:38.061728  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:45:38.061764  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:45:38.061787  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:38.061801  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:38.061931  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:45:38.062075  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:45:38.062147  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:45:38.062262  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:45:38.062339  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:45:38.062449  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:45:38.062623  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:45:38.062625  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:45:38.177317  226555 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:45:38.303222  226555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:45:38.309079  226555 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0817 21:45:38.309123  226555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:45:38.309187  226555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:45:38.318044  226555 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0817 21:45:38.318088  226555 start.go:466] detecting cgroup driver to use...
	I0817 21:45:38.318177  226555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:45:38.332217  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:45:38.346680  226555 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:45:38.346773  226555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:45:38.361074  226555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:45:38.375494  226555 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:45:38.531534  226555 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:45:38.672185  226555 docker.go:212] disabling docker service ...
	I0817 21:45:38.672276  226555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:45:38.688681  226555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:45:38.701954  226555 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:45:38.833111  226555 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:45:38.960665  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:45:38.974900  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:45:38.994166  226555 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0817 21:45:38.994212  226555 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:45:38.994272  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:45:39.004807  226555 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:45:39.004885  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:45:39.015333  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:45:39.025645  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:45:39.035524  226555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:45:39.045744  226555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:45:39.054640  226555 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0817 21:45:39.054788  226555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:45:39.063513  226555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:45:39.199331  226555 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:45:39.424683  226555 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:45:39.424772  226555 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:45:39.430146  226555 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:45:39.430189  226555 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:45:39.430200  226555 command_runner.go:130] > Device: 16h/22d	Inode: 1197        Links: 1
	I0817 21:45:39.430211  226555 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:45:39.430220  226555 command_runner.go:130] > Access: 2023-08-17 21:45:39.347753856 +0000
	I0817 21:45:39.430242  226555 command_runner.go:130] > Modify: 2023-08-17 21:45:39.347753856 +0000
	I0817 21:45:39.430251  226555 command_runner.go:130] > Change: 2023-08-17 21:45:39.347753856 +0000
	I0817 21:45:39.430256  226555 command_runner.go:130] >  Birth: -
	I0817 21:45:39.430288  226555 start.go:534] Will wait 60s for crictl version
	I0817 21:45:39.430355  226555 ssh_runner.go:195] Run: which crictl
	I0817 21:45:39.434751  226555 command_runner.go:130] > /usr/bin/crictl
	I0817 21:45:39.434838  226555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:45:39.472435  226555 command_runner.go:130] > Version:  0.1.0
	I0817 21:45:39.472462  226555 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:45:39.472469  226555 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0817 21:45:39.472479  226555 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0817 21:45:39.473643  226555 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:45:39.473752  226555 ssh_runner.go:195] Run: crio --version
	I0817 21:45:39.522779  226555 command_runner.go:130] > crio version 1.24.1
	I0817 21:45:39.522807  226555 command_runner.go:130] > Version:          1.24.1
	I0817 21:45:39.522817  226555 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:45:39.522824  226555 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:45:39.522833  226555 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:45:39.522839  226555 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:45:39.522845  226555 command_runner.go:130] > Compiler:         gc
	I0817 21:45:39.522851  226555 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:45:39.522858  226555 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:45:39.522869  226555 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:45:39.522876  226555 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:45:39.522887  226555 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:45:39.524298  226555 ssh_runner.go:195] Run: crio --version
	I0817 21:45:39.576260  226555 command_runner.go:130] > crio version 1.24.1
	I0817 21:45:39.576290  226555 command_runner.go:130] > Version:          1.24.1
	I0817 21:45:39.576323  226555 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:45:39.576331  226555 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:45:39.576345  226555 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:45:39.576353  226555 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:45:39.576360  226555 command_runner.go:130] > Compiler:         gc
	I0817 21:45:39.576370  226555 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:45:39.576378  226555 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:45:39.576392  226555 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:45:39.576401  226555 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:45:39.576407  226555 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:45:39.580969  226555 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 21:45:39.582542  226555 out.go:177]   - env NO_PROXY=192.168.39.104
	I0817 21:45:39.583985  226555 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:45:39.586995  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:39.587407  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:45:39.587432  226555 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:45:39.587676  226555 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:45:39.592716  226555 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0817 21:45:39.592783  226555 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371 for IP: 192.168.39.175
	I0817 21:45:39.592846  226555 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:45:39.593076  226555 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:45:39.593161  226555 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:45:39.593183  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:45:39.593214  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:45:39.593237  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:45:39.593258  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:45:39.593343  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 21:45:39.593397  226555 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 21:45:39.593416  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:45:39.593451  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:45:39.593488  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:45:39.593529  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:45:39.593603  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:45:39.593644  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:45:39.593668  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem -> /usr/share/ca-certificates/210670.pem
	I0817 21:45:39.593687  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /usr/share/ca-certificates/2106702.pem
	I0817 21:45:39.594785  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:45:39.622289  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:45:39.648878  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:45:39.675320  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:45:39.700087  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:45:39.723450  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 21:45:39.747588  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 21:45:39.771050  226555 ssh_runner.go:195] Run: openssl version
	I0817 21:45:39.776891  226555 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0817 21:45:39.777187  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:45:39.787161  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:45:39.791632  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:45:39.791938  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:45:39.791996  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:45:39.798396  226555 command_runner.go:130] > b5213941
	I0817 21:45:39.798476  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:45:39.807545  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 21:45:39.818295  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 21:45:39.823117  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:45:39.823152  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:45:39.823205  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 21:45:39.828653  226555 command_runner.go:130] > 51391683
	I0817 21:45:39.828983  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 21:45:39.837988  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 21:45:39.850565  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 21:45:39.855809  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:45:39.855914  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:45:39.855965  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 21:45:39.862360  226555 command_runner.go:130] > 3ec20f2e
	I0817 21:45:39.862693  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:45:39.872137  226555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:45:39.876442  226555 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:45:39.876484  226555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:45:39.876572  226555 ssh_runner.go:195] Run: crio config
	I0817 21:45:39.925736  226555 command_runner.go:130] ! time="2023-08-17 21:45:39.919346880Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0817 21:45:39.925771  226555 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:45:39.936468  226555 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:45:39.936493  226555 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:45:39.936502  226555 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:45:39.936508  226555 command_runner.go:130] > #
	I0817 21:45:39.936544  226555 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:45:39.936558  226555 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:45:39.936566  226555 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:45:39.936577  226555 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:45:39.936591  226555 command_runner.go:130] > # reload'.
	I0817 21:45:39.936605  226555 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:45:39.936619  226555 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:45:39.936644  226555 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:45:39.936656  226555 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:45:39.936660  226555 command_runner.go:130] > [crio]
	I0817 21:45:39.936666  226555 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:45:39.936673  226555 command_runner.go:130] > # containers images, in this directory.
	I0817 21:45:39.936684  226555 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0817 21:45:39.936701  226555 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:45:39.936713  226555 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0817 21:45:39.936726  226555 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:45:39.936740  226555 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:45:39.936749  226555 command_runner.go:130] > storage_driver = "overlay"
	I0817 21:45:39.936755  226555 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:45:39.936767  226555 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:45:39.936779  226555 command_runner.go:130] > storage_option = [
	I0817 21:45:39.936790  226555 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0817 21:45:39.936798  226555 command_runner.go:130] > ]
	I0817 21:45:39.936817  226555 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:45:39.936833  226555 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:45:39.936840  226555 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:45:39.936849  226555 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:45:39.936863  226555 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:45:39.936874  226555 command_runner.go:130] > # always happen on a node reboot
	I0817 21:45:39.936885  226555 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:45:39.936898  226555 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:45:39.936911  226555 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:45:39.936930  226555 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:45:39.936942  226555 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:45:39.936958  226555 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:45:39.936980  226555 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:45:39.936995  226555 command_runner.go:130] > # internal_wipe = true
	I0817 21:45:39.937003  226555 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:45:39.937010  226555 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:45:39.937017  226555 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:45:39.937026  226555 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:45:39.937036  226555 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:45:39.937042  226555 command_runner.go:130] > [crio.api]
	I0817 21:45:39.937051  226555 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:45:39.937059  226555 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:45:39.937068  226555 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:45:39.937076  226555 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:45:39.937089  226555 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:45:39.937097  226555 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:45:39.937103  226555 command_runner.go:130] > # stream_port = "0"
	I0817 21:45:39.937115  226555 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:45:39.937126  226555 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:45:39.937139  226555 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:45:39.937150  226555 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:45:39.937160  226555 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:45:39.937174  226555 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:45:39.937182  226555 command_runner.go:130] > # minutes.
	I0817 21:45:39.937189  226555 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:45:39.937204  226555 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:45:39.937222  226555 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:45:39.937232  226555 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:45:39.937243  226555 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:45:39.937256  226555 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:45:39.937266  226555 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:45:39.937274  226555 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:45:39.937290  226555 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:45:39.937301  226555 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0817 21:45:39.937316  226555 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:45:39.937327  226555 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0817 21:45:39.937360  226555 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:45:39.937374  226555 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:45:39.937383  226555 command_runner.go:130] > [crio.runtime]
	I0817 21:45:39.937397  226555 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:45:39.937409  226555 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:45:39.937419  226555 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:45:39.937432  226555 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:45:39.937438  226555 command_runner.go:130] > # default_ulimits = [
	I0817 21:45:39.937446  226555 command_runner.go:130] > # ]
	I0817 21:45:39.937460  226555 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:45:39.937472  226555 command_runner.go:130] > # no_pivot = false
	I0817 21:45:39.937484  226555 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:45:39.937498  226555 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:45:39.937509  226555 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:45:39.937519  226555 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:45:39.937526  226555 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:45:39.937540  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:45:39.937552  226555 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0817 21:45:39.937563  226555 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:45:39.937577  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:45:39.937587  226555 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:45:39.937601  226555 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:45:39.937609  226555 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:45:39.937621  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:45:39.937636  226555 command_runner.go:130] > conmon_env = [
	I0817 21:45:39.937649  226555 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0817 21:45:39.937661  226555 command_runner.go:130] > ]
	I0817 21:45:39.937674  226555 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:45:39.937685  226555 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:45:39.937691  226555 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:45:39.937699  226555 command_runner.go:130] > # default_env = [
	I0817 21:45:39.937708  226555 command_runner.go:130] > # ]
	I0817 21:45:39.937721  226555 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:45:39.937731  226555 command_runner.go:130] > # selinux = false
	I0817 21:45:39.937745  226555 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:45:39.937758  226555 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:45:39.937770  226555 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:45:39.937778  226555 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:45:39.937784  226555 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:45:39.937798  226555 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:45:39.937813  226555 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:45:39.937823  226555 command_runner.go:130] > # which might increase security.
	I0817 21:45:39.937831  226555 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0817 21:45:39.937845  226555 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:45:39.937860  226555 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:45:39.937870  226555 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:45:39.937884  226555 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0817 21:45:39.937897  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:45:39.937908  226555 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:45:39.937920  226555 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:45:39.937931  226555 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:45:39.937941  226555 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:45:39.937950  226555 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:45:39.937958  226555 command_runner.go:130] > # irqbalance daemon.
	I0817 21:45:39.937971  226555 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:45:39.937985  226555 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:45:39.937997  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:45:39.938007  226555 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:45:39.938016  226555 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:45:39.938026  226555 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:45:39.938035  226555 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:45:39.938044  226555 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:45:39.938072  226555 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:45:39.938087  226555 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:45:39.938096  226555 command_runner.go:130] > # will be added.
	I0817 21:45:39.938106  226555 command_runner.go:130] > # default_capabilities = [
	I0817 21:45:39.938115  226555 command_runner.go:130] > # 	"CHOWN",
	I0817 21:45:39.938124  226555 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:45:39.938131  226555 command_runner.go:130] > # 	"FSETID",
	I0817 21:45:39.938137  226555 command_runner.go:130] > # 	"FOWNER",
	I0817 21:45:39.938147  226555 command_runner.go:130] > # 	"SETGID",
	I0817 21:45:39.938153  226555 command_runner.go:130] > # 	"SETUID",
	I0817 21:45:39.938163  226555 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:45:39.938173  226555 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:45:39.938183  226555 command_runner.go:130] > # 	"KILL",
	I0817 21:45:39.938191  226555 command_runner.go:130] > # ]
	I0817 21:45:39.938206  226555 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:45:39.938215  226555 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:45:39.938224  226555 command_runner.go:130] > # default_sysctls = [
	I0817 21:45:39.938233  226555 command_runner.go:130] > # ]
	I0817 21:45:39.938244  226555 command_runner.go:130] > # List of devices on the host that a
	I0817 21:45:39.938258  226555 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:45:39.938268  226555 command_runner.go:130] > # allowed_devices = [
	I0817 21:45:39.938277  226555 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:45:39.938286  226555 command_runner.go:130] > # ]
	I0817 21:45:39.938296  226555 command_runner.go:130] > # List of additional devices. specified as
	I0817 21:45:39.938307  226555 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:45:39.938319  226555 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:45:39.938367  226555 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:45:39.938377  226555 command_runner.go:130] > # additional_devices = [
	I0817 21:45:39.938383  226555 command_runner.go:130] > # ]
	I0817 21:45:39.938389  226555 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:45:39.938399  226555 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:45:39.938409  226555 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:45:39.938419  226555 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:45:39.938428  226555 command_runner.go:130] > # ]
	I0817 21:45:39.938441  226555 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:45:39.938459  226555 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:45:39.938470  226555 command_runner.go:130] > # Defaults to false.
	I0817 21:45:39.938479  226555 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:45:39.938490  226555 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:45:39.938504  226555 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:45:39.938514  226555 command_runner.go:130] > # hooks_dir = [
	I0817 21:45:39.938525  226555 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:45:39.938534  226555 command_runner.go:130] > # ]
	I0817 21:45:39.938547  226555 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:45:39.938557  226555 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:45:39.938570  226555 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:45:39.938579  226555 command_runner.go:130] > #
	I0817 21:45:39.938589  226555 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:45:39.938604  226555 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:45:39.938616  226555 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:45:39.938624  226555 command_runner.go:130] > #
	I0817 21:45:39.938640  226555 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:45:39.938650  226555 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:45:39.938665  226555 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:45:39.938677  226555 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:45:39.938687  226555 command_runner.go:130] > #
	I0817 21:45:39.938694  226555 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:45:39.938706  226555 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:45:39.938720  226555 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:45:39.938727  226555 command_runner.go:130] > pids_limit = 1024
	I0817 21:45:39.938735  226555 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0817 21:45:39.938748  226555 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:45:39.938762  226555 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:45:39.938780  226555 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:45:39.938790  226555 command_runner.go:130] > # log_size_max = -1
	I0817 21:45:39.938804  226555 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0817 21:45:39.938812  226555 command_runner.go:130] > # log_to_journald = false
	I0817 21:45:39.938824  226555 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:45:39.938836  226555 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:45:39.938848  226555 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:45:39.938859  226555 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:45:39.938871  226555 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:45:39.938883  226555 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:45:39.938894  226555 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:45:39.938900  226555 command_runner.go:130] > # read_only = false
	I0817 21:45:39.938910  226555 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:45:39.938924  226555 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:45:39.938935  226555 command_runner.go:130] > # live configuration reload.
	I0817 21:45:39.938942  226555 command_runner.go:130] > # log_level = "info"
	I0817 21:45:39.938954  226555 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:45:39.938966  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:45:39.938976  226555 command_runner.go:130] > # log_filter = ""
	I0817 21:45:39.938985  226555 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:45:39.938997  226555 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:45:39.939006  226555 command_runner.go:130] > # separated by comma.
	I0817 21:45:39.939017  226555 command_runner.go:130] > # uid_mappings = ""
	I0817 21:45:39.939030  226555 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:45:39.939043  226555 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:45:39.939053  226555 command_runner.go:130] > # separated by comma.
	I0817 21:45:39.939062  226555 command_runner.go:130] > # gid_mappings = ""
	I0817 21:45:39.939072  226555 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:45:39.939085  226555 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:45:39.939099  226555 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:45:39.939109  226555 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:45:39.939123  226555 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:45:39.939136  226555 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:45:39.939149  226555 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:45:39.939157  226555 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:45:39.939166  226555 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:45:39.939181  226555 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:45:39.939194  226555 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0817 21:45:39.939204  226555 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:45:39.939217  226555 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:45:39.939230  226555 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:45:39.939239  226555 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:45:39.939247  226555 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:45:39.939259  226555 command_runner.go:130] > drop_infra_ctr = false
	I0817 21:45:39.939274  226555 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:45:39.939287  226555 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:45:39.939302  226555 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:45:39.939312  226555 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:45:39.939324  226555 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:45:39.939332  226555 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:45:39.939340  226555 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:45:39.939352  226555 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:45:39.939363  226555 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0817 21:45:39.939377  226555 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:45:39.939391  226555 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:45:39.939405  226555 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:45:39.939413  226555 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:45:39.939419  226555 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:45:39.939435  226555 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0817 21:45:39.939455  226555 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0817 21:45:39.939466  226555 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:45:39.939483  226555 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:45:39.939493  226555 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:45:39.939501  226555 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:45:39.939510  226555 command_runner.go:130] > # ]
	I0817 21:45:39.939521  226555 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:45:39.939535  226555 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:45:39.939550  226555 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:45:39.939563  226555 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:45:39.939572  226555 command_runner.go:130] > #
	I0817 21:45:39.939582  226555 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:45:39.939589  226555 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:45:39.939600  226555 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:45:39.939613  226555 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:45:39.939622  226555 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:45:39.939637  226555 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:45:39.939646  226555 command_runner.go:130] > # Where:
	I0817 21:45:39.939657  226555 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:45:39.939669  226555 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:45:39.939681  226555 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:45:39.939696  226555 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:45:39.939707  226555 command_runner.go:130] > #   in $PATH.
	I0817 21:45:39.939719  226555 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:45:39.939731  226555 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:45:39.939745  226555 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:45:39.939754  226555 command_runner.go:130] > #   state.
	I0817 21:45:39.939767  226555 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:45:39.939781  226555 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0817 21:45:39.939795  226555 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:45:39.939808  226555 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:45:39.939822  226555 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:45:39.939836  226555 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:45:39.939844  226555 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:45:39.939853  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:45:39.939869  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:45:39.939883  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:45:39.939896  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:45:39.939912  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:45:39.939925  226555 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:45:39.939936  226555 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:45:39.939950  226555 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:45:39.939962  226555 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:45:39.939972  226555 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:45:39.939983  226555 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0817 21:45:39.939992  226555 command_runner.go:130] > runtime_type = "oci"
	I0817 21:45:39.940004  226555 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:45:39.940013  226555 command_runner.go:130] > runtime_config_path = ""
	I0817 21:45:39.940020  226555 command_runner.go:130] > monitor_path = ""
	I0817 21:45:39.940026  226555 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:45:39.940036  226555 command_runner.go:130] > monitor_exec_cgroup = ""
	I0817 21:45:39.940051  226555 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:45:39.940061  226555 command_runner.go:130] > # running containers
	I0817 21:45:39.940071  226555 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:45:39.940085  226555 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:45:39.940129  226555 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:45:39.940143  226555 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0817 21:45:39.940155  226555 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:45:39.940166  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:45:39.940177  226555 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:45:39.940188  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:45:39.940196  226555 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:45:39.940207  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:45:39.940222  226555 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:45:39.940234  226555 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:45:39.940247  226555 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:45:39.940263  226555 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0817 21:45:39.940275  226555 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:45:39.940287  226555 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:45:39.940307  226555 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:45:39.940324  226555 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:45:39.940337  226555 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:45:39.940353  226555 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:45:39.940360  226555 command_runner.go:130] > # Example:
	I0817 21:45:39.940365  226555 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:45:39.940376  226555 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:45:39.940389  226555 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:45:39.940401  226555 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:45:39.940410  226555 command_runner.go:130] > # cpuset = 0
	I0817 21:45:39.940421  226555 command_runner.go:130] > # cpushares = "0-1"
	I0817 21:45:39.940430  226555 command_runner.go:130] > # Where:
	I0817 21:45:39.940438  226555 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:45:39.940448  226555 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:45:39.940460  226555 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:45:39.940474  226555 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:45:39.940490  226555 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:45:39.940503  226555 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0817 21:45:39.940512  226555 command_runner.go:130] > # 
	I0817 21:45:39.940525  226555 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:45:39.940531  226555 command_runner.go:130] > #
	I0817 21:45:39.940538  226555 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:45:39.940552  226555 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:45:39.940566  226555 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:45:39.940580  226555 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:45:39.940595  226555 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:45:39.940605  226555 command_runner.go:130] > [crio.image]
	I0817 21:45:39.940614  226555 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:45:39.940619  226555 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:45:39.940638  226555 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:45:39.940653  226555 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:45:39.940663  226555 command_runner.go:130] > # global_auth_file = ""
	I0817 21:45:39.940674  226555 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:45:39.940686  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:45:39.940696  226555 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:45:39.940705  226555 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:45:39.940711  226555 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:45:39.940718  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:45:39.940722  226555 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:45:39.940734  226555 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:45:39.940749  226555 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0817 21:45:39.940763  226555 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0817 21:45:39.940776  226555 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:45:39.940786  226555 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:45:39.940799  226555 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:45:39.940809  226555 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:45:39.940817  226555 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:45:39.940826  226555 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:45:39.940833  226555 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:45:39.940838  226555 command_runner.go:130] > # signature_policy = ""
	I0817 21:45:39.940844  226555 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:45:39.940852  226555 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:45:39.940858  226555 command_runner.go:130] > # changing them here.
	I0817 21:45:39.940862  226555 command_runner.go:130] > # insecure_registries = [
	I0817 21:45:39.940868  226555 command_runner.go:130] > # ]
	I0817 21:45:39.940882  226555 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:45:39.940895  226555 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0817 21:45:39.940905  226555 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:45:39.940917  226555 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:45:39.940927  226555 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:45:39.940940  226555 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:45:39.940949  226555 command_runner.go:130] > # CNI plugins.
	I0817 21:45:39.940954  226555 command_runner.go:130] > [crio.network]
	I0817 21:45:39.940963  226555 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:45:39.940970  226555 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0817 21:45:39.940977  226555 command_runner.go:130] > # cni_default_network = ""
	I0817 21:45:39.940983  226555 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:45:39.940989  226555 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:45:39.940995  226555 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:45:39.941001  226555 command_runner.go:130] > # plugin_dirs = [
	I0817 21:45:39.941005  226555 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:45:39.941011  226555 command_runner.go:130] > # ]
	I0817 21:45:39.941017  226555 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:45:39.941023  226555 command_runner.go:130] > [crio.metrics]
	I0817 21:45:39.941028  226555 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:45:39.941034  226555 command_runner.go:130] > enable_metrics = true
	I0817 21:45:39.941038  226555 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:45:39.941043  226555 command_runner.go:130] > # Per default all metrics are enabled.
	I0817 21:45:39.941051  226555 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:45:39.941059  226555 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:45:39.941067  226555 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:45:39.941074  226555 command_runner.go:130] > # metrics_collectors = [
	I0817 21:45:39.941078  226555 command_runner.go:130] > # 	"operations",
	I0817 21:45:39.941085  226555 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:45:39.941092  226555 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:45:39.941096  226555 command_runner.go:130] > # 	"operations_errors",
	I0817 21:45:39.941104  226555 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:45:39.941115  226555 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:45:39.941126  226555 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:45:39.941131  226555 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:45:39.941138  226555 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:45:39.941142  226555 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:45:39.941148  226555 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:45:39.941152  226555 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:45:39.941158  226555 command_runner.go:130] > # 	"containers_oom",
	I0817 21:45:39.941162  226555 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:45:39.941169  226555 command_runner.go:130] > # 	"operations_total",
	I0817 21:45:39.941174  226555 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:45:39.941182  226555 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:45:39.941186  226555 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:45:39.941192  226555 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:45:39.941197  226555 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:45:39.941203  226555 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:45:39.941208  226555 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:45:39.941214  226555 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:45:39.941218  226555 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:45:39.941224  226555 command_runner.go:130] > # ]
	I0817 21:45:39.941229  226555 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:45:39.941235  226555 command_runner.go:130] > # metrics_port = 9090
	I0817 21:45:39.941240  226555 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:45:39.941246  226555 command_runner.go:130] > # metrics_socket = ""
	I0817 21:45:39.941252  226555 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:45:39.941260  226555 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:45:39.941268  226555 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:45:39.941275  226555 command_runner.go:130] > # certificate on any modification event.
	I0817 21:45:39.941281  226555 command_runner.go:130] > # metrics_cert = ""
	I0817 21:45:39.941288  226555 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:45:39.941293  226555 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:45:39.941297  226555 command_runner.go:130] > # metrics_key = ""
	I0817 21:45:39.941303  226555 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:45:39.941309  226555 command_runner.go:130] > [crio.tracing]
	I0817 21:45:39.941314  226555 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:45:39.941321  226555 command_runner.go:130] > # enable_tracing = false
	I0817 21:45:39.941326  226555 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0817 21:45:39.941333  226555 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:45:39.941338  226555 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:45:39.941345  226555 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:45:39.941351  226555 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:45:39.941357  226555 command_runner.go:130] > [crio.stats]
	I0817 21:45:39.941363  226555 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:45:39.941371  226555 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:45:39.941377  226555 command_runner.go:130] > # stats_collection_period = 0
	I0817 21:45:39.941446  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:45:39.941456  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:45:39.941478  226555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:45:39.941505  226555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959371 NodeName:multinode-959371-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:45:39.941644  226555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959371-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:45:39.941700  226555 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-959371-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
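For reference, the kubeadm config block above is produced by filling a Go text/template with the node's parameters before it is copied to the machine. A minimal sketch of that templating step, assuming made-up field names (AdvertiseAddress, NodeName, PodSubnet) rather than minikube's actual template variables, and rendering only a reduced fragment of the config:

// Illustrative sketch: renders a small fragment of a kubeadm config the way
// the full version above is templated. Field names are invented for the sketch.
package main

import (
	"os"
	"text/template"
)

type nodeParams struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log above (node m02 of multinode-959371).
	_ = t.Execute(os.Stdout, nodeParams{
		AdvertiseAddress: "192.168.39.175",
		NodeName:         "multinode-959371-m02",
		PodSubnet:        "10.244.0.0/16",
	})
}

Running it prints a reduced InitConfiguration/ClusterConfiguration pair with the same values (192.168.39.175, multinode-959371-m02, 10.244.0.0/16) that appear in the full config above.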
	I0817 21:45:39.941753  226555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:45:39.952620  226555 command_runner.go:130] > kubeadm
	I0817 21:45:39.952658  226555 command_runner.go:130] > kubectl
	I0817 21:45:39.952664  226555 command_runner.go:130] > kubelet
	I0817 21:45:39.953123  226555 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:45:39.953194  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0817 21:45:39.963072  226555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0817 21:45:39.982080  226555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:45:40.001051  226555 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0817 21:45:40.004887  226555 command_runner.go:130] > 192.168.39.104	control-plane.minikube.internal
	I0817 21:45:40.005148  226555 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:45:40.005506  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:45:40.005514  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:45:40.005580  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:45:40.020782  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0817 21:45:40.021196  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:45:40.021691  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:45:40.021713  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:45:40.022031  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:45:40.022235  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:45:40.022375  226555 start.go:301] JoinCluster: &{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false
istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0}
	I0817 21:45:40.022499  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0817 21:45:40.022517  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:45:40.025395  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:45:40.025759  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:45:40.025786  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:45:40.025861  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:45:40.026044  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:45:40.026251  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:45:40.026383  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:45:40.200639  226555 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xmer7v.8niv7r2qgxyl3pc3 --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 21:45:40.205221  226555 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:45:40.205286  226555 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:45:40.205777  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:45:40.205844  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:45:40.221150  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
	I0817 21:45:40.221608  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:45:40.222092  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:45:40.222116  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:45:40.222463  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:45:40.222655  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:45:40.222859  226555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-959371-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0817 21:45:40.222880  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:45:40.225914  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:45:40.226446  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:45:40.226482  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:45:40.226724  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:45:40.226912  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:45:40.227090  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:45:40.227265  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:45:40.388955  226555 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0817 21:45:40.461735  226555 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-xjn26, kube-system/kube-proxy-zmldj
	I0817 21:45:43.484934  226555 command_runner.go:130] > node/multinode-959371-m02 cordoned
	I0817 21:45:43.484967  226555 command_runner.go:130] > pod "busybox-67b7f59bb-65x2b" has DeletionTimestamp older than 1 seconds, skipping
	I0817 21:45:43.484980  226555 command_runner.go:130] > node/multinode-959371-m02 drained
	I0817 21:45:43.485006  226555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-959371-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.262122024s)
	I0817 21:45:43.485024  226555 node.go:108] successfully drained node "m02"
	I0817 21:45:43.485407  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:45:43.485689  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:45:43.486091  226555 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0817 21:45:43.486155  226555 round_trippers.go:463] DELETE https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:45:43.486166  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:43.486178  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:43.486191  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:43.486205  226555 round_trippers.go:473]     Content-Type: application/json
	I0817 21:45:43.500536  226555 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0817 21:45:43.500571  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:43.500583  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:43.500592  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:43.500600  226555 round_trippers.go:580]     Content-Length: 171
	I0817 21:45:43.500608  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:43 GMT
	I0817 21:45:43.500616  226555 round_trippers.go:580]     Audit-Id: 23b0a19a-88e3-4ef6-b760-434a1183206a
	I0817 21:45:43.500625  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:43.500633  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:43.500673  226555 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-959371-m02","kind":"nodes","uid":"c81a39b9-ca66-4685-8303-3788a4649c9f"}}
	I0817 21:45:43.500786  226555 node.go:124] successfully deleted node "m02"
	I0817 21:45:43.500807  226555 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
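The drain and the DELETE https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02 call above can be reproduced with client-go. A minimal sketch of the cordon-and-delete portion, assuming the in-cluster kubeconfig path from the log and leaving out the pod-eviction part that kubectl drain performs:

// Sketch of the "remove existing worker before rejoin" step logged above:
// cordon the node, then delete the Node object.
package main

import (
	"context"
	"encoding/json"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the logged drain command; an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	node := "multinode-959371-m02"

	// Cordon: mark the node unschedulable via a strategic merge patch.
	patch, _ := json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{"unschedulable": true},
	})
	if _, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}

	// Delete the Node object, mirroring the DELETE /api/v1/nodes/... call above.
	if err := cs.CoreV1().Nodes().Delete(ctx, node, metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Printf("node %s cordoned and deleted", node)
}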
	I0817 21:45:43.500838  226555 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:45:43.500866  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xmer7v.8niv7r2qgxyl3pc3 --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-959371-m02"
	I0817 21:45:43.558864  226555 command_runner.go:130] > [preflight] Running pre-flight checks
	I0817 21:45:43.713696  226555 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0817 21:45:43.713735  226555 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0817 21:45:43.781044  226555 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:45:43.781068  226555 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:45:43.781074  226555 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:45:43.941754  226555 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0817 21:45:44.468827  226555 command_runner.go:130] > This node has joined the cluster:
	I0817 21:45:44.468859  226555 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0817 21:45:44.468869  226555 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0817 21:45:44.468878  226555 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0817 21:45:44.471553  226555 command_runner.go:130] ! W0817 21:45:43.552257    2574 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0817 21:45:44.471587  226555 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0817 21:45:44.471597  226555 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0817 21:45:44.471613  226555 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0817 21:45:44.471674  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0817 21:45:44.738574  226555 start.go:303] JoinCluster complete in 4.716194976s
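The join itself is just the printed kubeadm command executed on the joining machine (minikube runs it over SSH). A sketch of that step as a plain command execution, assuming it runs on the m02 host, reusing the token and CA hash printed above, and substituting a fixed PATH for the shell's $PATH expansion in the logged command:

// Sketch only: executes the join command shown in the log on the joining node.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.27.4:/usr/sbin:/usr/bin:/sbin:/bin",
		"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "xmer7v.8niv7r2qgxyl3pc3",
		"--discovery-token-ca-cert-hash", "sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944",
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/crio/crio.sock",
		"--node-name=multinode-959371-m02")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}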
	I0817 21:45:44.738605  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:45:44.738611  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:45:44.738673  226555 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:45:44.745122  226555 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:45:44.745152  226555 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0817 21:45:44.745159  226555 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0817 21:45:44.745169  226555 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:45:44.745178  226555 command_runner.go:130] > Access: 2023-08-17 21:43:18.834579600 +0000
	I0817 21:45:44.745185  226555 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0817 21:45:44.745193  226555 command_runner.go:130] > Change: 2023-08-17 21:43:16.782579600 +0000
	I0817 21:45:44.745199  226555 command_runner.go:130] >  Birth: -
	I0817 21:45:44.745264  226555 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:45:44.745275  226555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:45:44.769659  226555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:45:45.281534  226555 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:45:45.293611  226555 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:45:45.297178  226555 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0817 21:45:45.309603  226555 command_runner.go:130] > daemonset.apps/kindnet configured
	I0817 21:45:45.312886  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:45:45.313118  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:45:45.313522  226555 round_trippers.go:463] GET https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:45:45.313537  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.313545  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.313551  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.317816  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:45:45.317840  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.317848  226555 round_trippers.go:580]     Audit-Id: ea49cfc9-78c3-4817-84e4-bca1dd805fd7
	I0817 21:45:45.317854  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.317859  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.317865  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.317871  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.317883  226555 round_trippers.go:580]     Content-Length: 291
	I0817 21:45:45.317898  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.317925  226555 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"885","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0817 21:45:45.318079  226555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-959371" context rescaled to 1 replicas
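The GET on the coredns scale subresource and the rescale to 1 replica can be expressed with client-go's GetScale/UpdateScale on the kube-system/coredns deployment. A minimal sketch, assuming the kubeconfig path shown in the log:

// Sketch of the coredns rescale step using the autoscaling/v1 Scale subresource.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-203458/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// GET .../deployments/coredns/scale, as in the request above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// Write the Scale back only when a change is actually needed.
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("coredns replicas: %d", scale.Spec.Replicas)
}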
	I0817 21:45:45.318118  226555 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:45:45.321389  226555 out.go:177] * Verifying Kubernetes components...
	I0817 21:45:45.323196  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:45:45.337441  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:45:45.337702  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:45:45.338008  226555 node_ready.go:35] waiting up to 6m0s for node "multinode-959371-m02" to be "Ready" ...
	I0817 21:45:45.338104  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:45:45.338115  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.338127  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.338141  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.342082  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:45.342104  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.342112  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.342117  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.342123  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.342128  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.342134  226555 round_trippers.go:580]     Audit-Id: 514544c6-33ee-4162-be4d-aae8993f4bc9
	I0817 21:45:45.342142  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.342971  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"678576f5-add2-4ca6-91e2-ac74cb1639ff","resourceVersion":"1030","creationTimestamp":"2023-08-17T21:45:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:45:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0817 21:45:45.343254  226555 node_ready.go:49] node "multinode-959371-m02" has status "Ready":"True"
	I0817 21:45:45.343270  226555 node_ready.go:38] duration metric: took 5.244029ms waiting for node "multinode-959371-m02" to be "Ready" ...
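The node readiness wait above boils down to polling the Node object until its Ready condition reports True. A minimal sketch, assuming an arbitrary 2s poll interval together with the 6m timeout mentioned in the log:

// Sketch of the node_ready wait: poll the Node and check its Ready condition.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-203458/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-959371-m02", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		return nodeReady(n), nil
	})
	if err != nil {
		log.Fatalf("node never became Ready: %v", err)
	}
	log.Println("node is Ready")
}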
	I0817 21:45:45.343283  226555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:45:45.343347  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:45:45.343357  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.343368  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.343379  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.350519  226555 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0817 21:45:45.350545  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.350553  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.350560  226555 round_trippers.go:580]     Audit-Id: 63ee7a58-6bb0-4a6e-8ff9-8a16111e79b0
	I0817 21:45:45.350565  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.350570  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.350576  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.350582  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.352663  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1037"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82249 chars]
	I0817 21:45:45.355986  226555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.356102  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:45:45.356113  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.356126  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.356140  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.361339  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:45:45.361370  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.361381  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.361390  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.361411  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.361433  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.361441  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.361450  226555 round_trippers.go:580]     Audit-Id: a0d8f6b1-7f19-4a85-93a3-16ba2db06fcb
	I0817 21:45:45.361581  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0817 21:45:45.362236  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:45.362256  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.362266  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.362276  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.367456  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:45:45.367480  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.367487  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.367493  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.367498  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.367503  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.367509  226555 round_trippers.go:580]     Audit-Id: 82cdc93d-c1cb-4f82-b820-5facaa77a0fb
	I0817 21:45:45.367516  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.367729  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:45:45.368232  226555 pod_ready.go:92] pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:45.368257  226555 pod_ready.go:81] duration metric: took 12.240063ms waiting for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.368273  226555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.368433  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-959371
	I0817 21:45:45.368454  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.368464  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.368470  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.371486  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:45:45.371512  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.371522  226555 round_trippers.go:580]     Audit-Id: e3249a2b-ba74-4c61-ba77-a405a36a245c
	I0817 21:45:45.371532  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.371543  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.371552  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.371560  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.371573  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.371835  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"866","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0817 21:45:45.372344  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:45.372364  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.372376  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.372385  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.375423  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:45.375448  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.375459  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.375467  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.375476  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.375487  226555 round_trippers.go:580]     Audit-Id: ab94941e-fc5e-413c-83c5-92c1640b2ec1
	I0817 21:45:45.375499  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.375508  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.375694  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:45:45.376159  226555 pod_ready.go:92] pod "etcd-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:45.376187  226555 pod_ready.go:81] duration metric: took 7.90571ms waiting for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.376213  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.376296  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-959371
	I0817 21:45:45.376310  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.376322  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.376332  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.378865  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:45:45.378889  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.378899  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.378907  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.378916  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.378925  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.378933  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.378942  226555 round_trippers.go:580]     Audit-Id: 2498f833-be8a-4846-9d3e-18332f1afe2d
	I0817 21:45:45.379194  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-959371","namespace":"kube-system","uid":"0efb1ae7-705a-47df-91c6-0d9390b68983","resourceVersion":"863","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.104:8443","kubernetes.io/config.hash":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.mirror":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.seen":"2023-08-17T21:33:26.519082064Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0817 21:45:45.379852  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:45.379876  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.379888  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.379899  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.384263  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:45:45.384282  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.384290  226555 round_trippers.go:580]     Audit-Id: 20ff56d1-5bd7-41ad-b11d-f346c6b8773f
	I0817 21:45:45.384295  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.384301  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.384306  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.384312  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.384320  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.384753  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:45:45.385076  226555 pod_ready.go:92] pod "kube-apiserver-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:45.385090  226555 pod_ready.go:81] duration metric: took 8.863862ms waiting for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.385100  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.385153  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:45:45.385161  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.385169  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.385175  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.387884  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:45:45.387910  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.387920  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.387928  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.387937  226555 round_trippers.go:580]     Audit-Id: 58ec1fb8-a30b-4837-ab0b-f3e1a6c0250c
	I0817 21:45:45.387945  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.387956  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.387968  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.388123  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"892","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0817 21:45:45.388696  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:45.388717  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.388729  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.388738  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.394253  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:45:45.394277  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.394286  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.394295  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.394304  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.394311  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.394321  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.394334  226555 round_trippers.go:580]     Audit-Id: 5589ecdc-3fdd-4bf7-b530-e01c9f7e88e3
	I0817 21:45:45.394481  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:45:45.394842  226555 pod_ready.go:92] pod "kube-controller-manager-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:45.394858  226555 pod_ready.go:81] duration metric: took 9.751731ms waiting for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.394873  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.538222  226555 request.go:628] Waited for 143.256325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:45:45.538287  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:45:45.538292  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.538301  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.538308  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.541528  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:45.541560  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.541570  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.541578  226555 round_trippers.go:580]     Audit-Id: 4abd7847-775e-48e5-8279-795311d2ffb9
	I0817 21:45:45.541585  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.541593  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.541601  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.541608  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.541873  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gdf7","generateName":"kube-proxy-","namespace":"kube-system","uid":"00e6f433-51d6-49bb-a927-780720361eb3","resourceVersion":"831","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0817 21:45:45.738799  226555 request.go:628] Waited for 196.408332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:45.738866  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:45.738871  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.738885  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.738894  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.742491  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:45.742519  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.742529  226555 round_trippers.go:580]     Audit-Id: 81f25022-ff55-4613-abda-d5fea02ff0b5
	I0817 21:45:45.742537  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.742546  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.742553  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.742561  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.742573  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.742886  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:45:45.743260  226555 pod_ready.go:92] pod "kube-proxy-8gdf7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:45.743281  226555 pod_ready.go:81] duration metric: took 348.400845ms waiting for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.743294  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:45.938692  226555 request.go:628] Waited for 195.32131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:45:45.938790  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:45:45.938801  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:45.938811  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:45.938824  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:45.943042  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:45:45.943074  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:45.943086  226555 round_trippers.go:580]     Audit-Id: 5fdf65dc-f35d-4496-a922-a04c1de958a9
	I0817 21:45:45.943092  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:45.943099  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:45.943107  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:45.943115  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:45.943122  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:45 GMT
	I0817 21:45:45.943273  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g94gj","generateName":"kube-proxy-","namespace":"kube-system","uid":"050b1eab-a69f-4f6f-b3b8-f29ef38c9042","resourceVersion":"719","creationTimestamp":"2023-08-17T21:35:12Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0817 21:45:46.138185  226555 request.go:628] Waited for 194.292161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:45:46.138271  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:45:46.138278  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:46.138292  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:46.138302  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:46.141360  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:46.141393  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:46.141404  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:46.141415  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:46 GMT
	I0817 21:45:46.141423  226555 round_trippers.go:580]     Audit-Id: 1174a9f8-e8ec-4367-8717-bb3ba28a245c
	I0817 21:45:46.141431  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:46.141438  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:46.141445  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:46.141661  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m03","uid":"31bc1a59-dff0-4542-804e-a9c019ecd2f4","resourceVersion":"889","creationTimestamp":"2023-08-17T21:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0817 21:45:46.142081  226555 pod_ready.go:92] pod "kube-proxy-g94gj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:46.142106  226555 pod_ready.go:81] duration metric: took 398.802266ms waiting for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:46.142124  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:46.338617  226555 request.go:628] Waited for 196.389913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:45:46.338684  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:45:46.338690  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:46.338707  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:46.338716  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:46.341797  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:46.341831  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:46.341842  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:46.341850  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:46.341859  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:46.341867  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:46.341876  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:46 GMT
	I0817 21:45:46.341885  226555 round_trippers.go:580]     Audit-Id: ea0b39de-f59a-42c9-9a37-fd9210d9f22a
	I0817 21:45:46.342029  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmldj","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac59040d-df0c-416f-9660-4a41f7b75789","resourceVersion":"1045","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0817 21:45:46.538908  226555 request.go:628] Waited for 196.247251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:45:46.538987  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:45:46.538994  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:46.539004  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:46.539014  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:46.542991  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:46.543017  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:46.543025  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:46.543031  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:46.543036  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:46.543041  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:46.543047  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:46 GMT
	I0817 21:45:46.543052  226555 round_trippers.go:580]     Audit-Id: 5ee93fc9-ab37-48db-93fe-7d6f0bae7a5f
	I0817 21:45:46.543196  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"678576f5-add2-4ca6-91e2-ac74cb1639ff","resourceVersion":"1030","creationTimestamp":"2023-08-17T21:45:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:45:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0817 21:45:46.543520  226555 pod_ready.go:92] pod "kube-proxy-zmldj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:46.543539  226555 pod_ready.go:81] duration metric: took 401.403904ms waiting for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:46.543555  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:46.739047  226555 request.go:628] Waited for 195.412778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:45:46.739122  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:45:46.739133  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:46.739145  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:46.739177  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:46.742528  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:46.742551  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:46.742562  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:46 GMT
	I0817 21:45:46.742571  226555 round_trippers.go:580]     Audit-Id: 59678ad8-11d0-471e-8983-10dd2224fe26
	I0817 21:45:46.742580  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:46.742589  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:46.742597  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:46.742606  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:46.742760  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-959371","namespace":"kube-system","uid":"a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2","resourceVersion":"882","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.mirror":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.seen":"2023-08-17T21:33:26.519087461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0817 21:45:46.938674  226555 request.go:628] Waited for 195.383325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:46.938779  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:45:46.938791  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:46.938805  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:46.938818  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:46.941619  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:45:46.941646  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:46.941656  226555 round_trippers.go:580]     Audit-Id: 02b6f234-2553-4e7e-99ba-b7814fe7cd70
	I0817 21:45:46.941664  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:46.941671  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:46.941679  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:46.941687  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:46.941695  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:46 GMT
	I0817 21:45:46.941852  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:45:46.942336  226555 pod_ready.go:92] pod "kube-scheduler-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:45:46.942361  226555 pod_ready.go:81] duration metric: took 398.79641ms waiting for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:45:46.942381  226555 pod_ready.go:38] duration metric: took 1.599086922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:45:46.942421  226555 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:45:46.942487  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:45:46.957192  226555 system_svc.go:56] duration metric: took 14.754731ms WaitForService to wait for kubelet.
	I0817 21:45:46.957229  226555 kubeadm.go:581] duration metric: took 1.639078301s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:45:46.957255  226555 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:45:47.138766  226555 request.go:628] Waited for 181.395899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I0817 21:45:47.138839  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I0817 21:45:47.138847  226555 round_trippers.go:469] Request Headers:
	I0817 21:45:47.138859  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:45:47.138869  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:45:47.142232  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:45:47.142263  226555 round_trippers.go:577] Response Headers:
	I0817 21:45:47.142274  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:45:47.142283  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:45:47 GMT
	I0817 21:45:47.142292  226555 round_trippers.go:580]     Audit-Id: 0ea664d6-765b-45e4-9457-ff09fc76c65b
	I0817 21:45:47.142301  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:45:47.142310  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:45:47.142319  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:45:47.142534  226555 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1048"},"items":[{"metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15106 chars]
	I0817 21:45:47.143197  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:45:47.143239  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:45:47.143250  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:45:47.143254  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:45:47.143259  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:45:47.143263  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:45:47.143267  226555 node_conditions.go:105] duration metric: took 186.006819ms to run NodePressure ...
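The pod_ready.go and node_conditions.go lines above poll the apiserver for each control-plane pod's Ready condition and then read per-node capacity. Below is a minimal client-go sketch of that kind of check; the kubeconfig path and pod name are placeholders, and it illustrates the API calls involved rather than minikube's own implementation.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig; minikube writes its own per-profile config.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Fetch one pod and report its Ready condition (pod name is a placeholder).
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-5d78c9869d-87rlb", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("%s Ready=%s\n", pod.Name, c.Status)
            }
        }

        // Read per-node CPU and ephemeral-storage capacity, as node_conditions.go logs above.
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
        }
    }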
	I0817 21:45:47.143281  226555 start.go:228] waiting for startup goroutines ...
	I0817 21:45:47.143306  226555 start.go:242] writing updated cluster config ...
	I0817 21:45:47.143791  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:45:47.143881  226555 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:45:47.147044  226555 out.go:177] * Starting worker node multinode-959371-m03 in cluster multinode-959371
	I0817 21:45:47.148630  226555 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:45:47.148675  226555 cache.go:57] Caching tarball of preloaded images
	I0817 21:45:47.148807  226555 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:45:47.148821  226555 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:45:47.148951  226555 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/config.json ...
	I0817 21:45:47.149125  226555 start.go:365] acquiring machines lock for multinode-959371-m03: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 21:45:47.149177  226555 start.go:369] acquired machines lock for "multinode-959371-m03" in 25.953µs
	I0817 21:45:47.149192  226555 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:45:47.149198  226555 fix.go:54] fixHost starting: m03
	I0817 21:45:47.149471  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:45:47.149510  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:45:47.165349  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0817 21:45:47.165929  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:45:47.166559  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:45:47.166585  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:45:47.166970  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:45:47.167145  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:45:47.167306  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetState
	I0817 21:45:47.169133  226555 fix.go:102] recreateIfNeeded on multinode-959371-m03: state=Running err=<nil>
	W0817 21:45:47.169159  226555 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:45:47.171400  226555 out.go:177] * Updating the running kvm2 "multinode-959371-m03" VM ...
	I0817 21:45:47.173066  226555 machine.go:88] provisioning docker machine ...
	I0817 21:45:47.173099  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:45:47.173393  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetMachineName
	I0817 21:45:47.173562  226555 buildroot.go:166] provisioning hostname "multinode-959371-m03"
	I0817 21:45:47.173589  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetMachineName
	I0817 21:45:47.173789  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:45:47.176399  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.176849  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:45:47.176882  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.177065  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:45:47.177196  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.177378  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.177569  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:45:47.177788  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:45:47.178403  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0817 21:45:47.178425  226555 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959371-m03 && echo "multinode-959371-m03" | sudo tee /etc/hostname
	I0817 21:45:47.320605  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959371-m03
	
	I0817 21:45:47.320647  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:45:47.323598  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.323928  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:45:47.323957  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.324123  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:45:47.324341  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.324576  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.324775  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:45:47.324970  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:45:47.325427  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0817 21:45:47.325475  226555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959371-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959371-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959371-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:45:47.455394  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
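The libmachine lines above run the hostname and /etc/hosts commands over SSH against 192.168.39.227. A bare-bones sketch of running one such command with golang.org/x/crypto/ssh follows; the key path is a placeholder, the command mirrors the one logged above, and this is an illustration, not the ssh_runner that minikube actually uses.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Placeholder path to the machine's private key.
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-959371-m03/id_rsa"))
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.227:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Same command the provisioner logs above.
        out, err := session.CombinedOutput(`sudo hostname multinode-959371-m03 && echo "multinode-959371-m03" | sudo tee /etc/hostname`)
        fmt.Printf("output: %s err: %v\n", out, err)
    }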
	I0817 21:45:47.455428  226555 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 21:45:47.455456  226555 buildroot.go:174] setting up certificates
	I0817 21:45:47.455470  226555 provision.go:83] configureAuth start
	I0817 21:45:47.455483  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetMachineName
	I0817 21:45:47.455916  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetIP
	I0817 21:45:47.458810  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.459101  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:45:47.459137  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.459324  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:45:47.461395  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.461672  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:45:47.461693  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.461831  226555 provision.go:138] copyHostCerts
	I0817 21:45:47.461865  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:45:47.461905  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 21:45:47.461918  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 21:45:47.462002  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 21:45:47.462169  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:45:47.462201  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 21:45:47.462212  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 21:45:47.462254  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 21:45:47.462320  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:45:47.462346  226555 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 21:45:47.462355  226555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 21:45:47.462388  226555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 21:45:47.462472  226555 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.multinode-959371-m03 san=[192.168.39.227 192.168.39.227 localhost 127.0.0.1 minikube multinode-959371-m03]
	I0817 21:45:47.749573  226555 provision.go:172] copyRemoteCerts
	I0817 21:45:47.749642  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:45:47.749687  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:45:47.752783  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.753226  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:45:47.753264  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.753457  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:45:47.753692  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.753886  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:45:47.754035  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m03/id_rsa Username:docker}
	I0817 21:45:47.851929  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:45:47.852004  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 21:45:47.876017  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:45:47.876086  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0817 21:45:47.899732  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:45:47.899805  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:45:47.924632  226555 provision.go:86] duration metric: configureAuth took 469.144435ms
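
The copyRemoteCerts step above pushed the CA, the generated server certificate and its key into /etc/docker on the guest. A minimal way to sanity-check that material from inside the guest (a sketch, not part of the test run; the guest's OpenSSL 1.1.1n reported further down supports the -ext option):

    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    sudo openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem
    # expected SANs per the provision line above: 192.168.39.227, localhost, 127.0.0.1, minikube, multinode-959371-m03
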
	I0817 21:45:47.924665  226555 buildroot.go:189] setting minikube options for container-runtime
	I0817 21:45:47.924893  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:45:47.925002  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:45:47.927784  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.928207  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:45:47.928241  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:45:47.928373  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:45:47.928614  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.928772  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:45:47.928891  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:45:47.929033  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:45:47.929438  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0817 21:45:47.929455  226555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:47:18.568392  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:47:18.568460  226555 machine.go:91] provisioned docker machine in 1m31.395373359s
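
Nearly all of the 1m31s reported here is spent inside the single SSH command issued at 21:45:47.929: writing /etc/sysconfig/crio.minikube and then running "sudo systemctl restart crio", which only returns at 21:47:18.568. If the drop-in needs to be inspected afterwards, the key path and address already logged are enough, e.g. (a sketch, not part of the run):

    ssh -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m03/id_rsa \
        docker@192.168.39.227 'cat /etc/sysconfig/crio.minikube; systemctl status crio --no-pager'
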
	I0817 21:47:18.568499  226555 start.go:300] post-start starting for "multinode-959371-m03" (driver="kvm2")
	I0817 21:47:18.568535  226555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:47:18.568562  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:47:18.568995  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:47:18.569049  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:47:18.572125  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.572511  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:47:18.572546  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.572795  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:47:18.573001  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:47:18.573179  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:47:18.573369  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m03/id_rsa Username:docker}
	I0817 21:47:18.670498  226555 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:47:18.675890  226555 command_runner.go:130] > NAME=Buildroot
	I0817 21:47:18.675916  226555 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0817 21:47:18.675921  226555 command_runner.go:130] > ID=buildroot
	I0817 21:47:18.675927  226555 command_runner.go:130] > VERSION_ID=2021.02.12
	I0817 21:47:18.675932  226555 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0817 21:47:18.675991  226555 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 21:47:18.676015  226555 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 21:47:18.676151  226555 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 21:47:18.676224  226555 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 21:47:18.676235  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /etc/ssl/certs/2106702.pem
	I0817 21:47:18.676336  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:47:18.686916  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:47:18.711562  226555 start.go:303] post-start completed in 143.04176ms
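
The post-start phase creates the standard minikube directories and syncs local file assets; the only local asset here is the extra CA bundle 2106702.pem, which lands in /etc/ssl/certs on the guest. Condensed into equivalent shell (a sketch using only the paths logged above):

    sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/lib/minikube /var/lib/minikube/certs /etc/ssl/certs
    ls -l /etc/ssl/certs/2106702.pem    # copied from the host's .minikube/files/etc/ssl/certs/2106702.pem
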
	I0817 21:47:18.711594  226555 fix.go:56] fixHost completed within 1m31.562395555s
	I0817 21:47:18.711618  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:47:18.714696  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.715178  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:47:18.715226  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.715380  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:47:18.715595  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:47:18.715792  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:47:18.715936  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:47:18.716119  226555 main.go:141] libmachine: Using SSH client type: native
	I0817 21:47:18.716534  226555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0817 21:47:18.716547  226555 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 21:47:18.847623  226555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692308838.842209888
	
	I0817 21:47:18.847663  226555 fix.go:206] guest clock: 1692308838.842209888
	I0817 21:47:18.847684  226555 fix.go:219] Guest: 2023-08-17 21:47:18.842209888 +0000 UTC Remote: 2023-08-17 21:47:18.711598349 +0000 UTC m=+551.157354849 (delta=130.611539ms)
	I0817 21:47:18.847702  226555 fix.go:190] guest clock delta is within tolerance: 130.611539ms
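
The guest-clock check converts the epoch the guest just printed (1692308838.842209888, i.e. 2023-08-17 21:47:18.842 UTC) and compares it with the local timestamp captured when fixHost finished (21:47:18.711), which is where the 130.611539ms delta comes from. The conversion can be reproduced with GNU date:

    date -u -d @1692308838.842209888 '+%Y-%m-%d %H:%M:%S.%N'
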
	I0817 21:47:18.847707  226555 start.go:83] releasing machines lock for "multinode-959371-m03", held for 1m31.698519891s
	I0817 21:47:18.847758  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:47:18.848076  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetIP
	I0817 21:47:18.850915  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.851392  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:47:18.851448  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.853742  226555 out.go:177] * Found network options:
	I0817 21:47:18.855676  226555 out.go:177]   - NO_PROXY=192.168.39.104,192.168.39.175
	W0817 21:47:18.857450  226555 proxy.go:119] fail to check proxy env: Error ip not in block
	W0817 21:47:18.857475  226555 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:47:18.857496  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:47:18.858317  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:47:18.858548  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .DriverName
	I0817 21:47:18.858673  226555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:47:18.858712  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	W0817 21:47:18.858808  226555 proxy.go:119] fail to check proxy env: Error ip not in block
	W0817 21:47:18.858834  226555 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:47:18.858913  226555 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:47:18.858935  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHHostname
	I0817 21:47:18.861578  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.861697  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.861980  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:47:18.862018  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.862149  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:47:18.862206  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:47:18.862243  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:18.862384  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:47:18.862392  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHPort
	I0817 21:47:18.862555  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:47:18.862653  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHKeyPath
	I0817 21:47:18.862715  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m03/id_rsa Username:docker}
	I0817 21:47:18.862799  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetSSHUsername
	I0817 21:47:18.862950  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m03/id_rsa Username:docker}
	I0817 21:47:19.101937  226555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:47:19.101937  226555 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:47:19.108610  226555 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0817 21:47:19.108667  226555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 21:47:19.108741  226555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:47:19.118737  226555 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0817 21:47:19.118773  226555 start.go:466] detecting cgroup driver to use...
	I0817 21:47:19.118856  226555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:47:19.134026  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:47:19.147828  226555 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:47:19.147909  226555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:47:19.163775  226555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:47:19.177735  226555 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:47:19.312840  226555 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:47:19.444963  226555 docker.go:212] disabling docker service ...
	I0817 21:47:19.445046  226555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:47:19.460578  226555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:47:19.474649  226555 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:47:19.604054  226555 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:47:19.733079  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:47:19.748171  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:47:19.766907  226555 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
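
With /etc/crictl.yaml pointing runtime-endpoint at unix:///var/run/crio/crio.sock, the "sudo /usr/bin/crictl version" call a few lines below works without an explicit --runtime-endpoint flag. A hedged follow-up check once CRI-O is restarted could be:

    sudo crictl info    # prints the runtime status/conditions as JSON once /var/run/crio/crio.sock exists
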
	I0817 21:47:19.767495  226555 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:47:19.767575  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:47:19.779221  226555 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:47:19.779318  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:47:19.791276  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:47:19.801407  226555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
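
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs, and conmon is moved into the pod cgroup. Assuming the stock drop-in layout, the affected keys end up roughly as follows (cgroup_manager and conmon_cgroup are confirmed by the "crio config" dump further down; the section headers are an assumption):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
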
	I0817 21:47:19.812649  226555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:47:19.825499  226555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:47:19.835541  226555 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0817 21:47:19.835620  226555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:47:19.846892  226555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:47:19.983811  226555 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:47:20.232074  226555 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:47:20.232147  226555 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:47:20.237718  226555 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:47:20.237741  226555 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:47:20.237755  226555 command_runner.go:130] > Device: 16h/22d	Inode: 1210        Links: 1
	I0817 21:47:20.237761  226555 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:47:20.237766  226555 command_runner.go:130] > Access: 2023-08-17 21:47:20.147842202 +0000
	I0817 21:47:20.237772  226555 command_runner.go:130] > Modify: 2023-08-17 21:47:20.147842202 +0000
	I0817 21:47:20.237777  226555 command_runner.go:130] > Change: 2023-08-17 21:47:20.147842202 +0000
	I0817 21:47:20.237780  226555 command_runner.go:130] >  Birth: -
	I0817 21:47:20.238033  226555 start.go:534] Will wait 60s for crictl version
	I0817 21:47:20.238108  226555 ssh_runner.go:195] Run: which crictl
	I0817 21:47:20.242447  226555 command_runner.go:130] > /usr/bin/crictl
	I0817 21:47:20.242714  226555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:47:20.283564  226555 command_runner.go:130] > Version:  0.1.0
	I0817 21:47:20.283593  226555 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:47:20.283599  226555 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0817 21:47:20.283610  226555 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0817 21:47:20.283680  226555 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 21:47:20.283764  226555 ssh_runner.go:195] Run: crio --version
	I0817 21:47:20.333802  226555 command_runner.go:130] > crio version 1.24.1
	I0817 21:47:20.333824  226555 command_runner.go:130] > Version:          1.24.1
	I0817 21:47:20.333838  226555 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:47:20.333845  226555 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:47:20.333853  226555 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:47:20.333860  226555 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:47:20.333865  226555 command_runner.go:130] > Compiler:         gc
	I0817 21:47:20.333880  226555 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:47:20.333888  226555 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:47:20.333898  226555 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:47:20.333908  226555 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:47:20.333918  226555 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:47:20.333994  226555 ssh_runner.go:195] Run: crio --version
	I0817 21:47:20.389709  226555 command_runner.go:130] > crio version 1.24.1
	I0817 21:47:20.389733  226555 command_runner.go:130] > Version:          1.24.1
	I0817 21:47:20.389740  226555 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0817 21:47:20.389745  226555 command_runner.go:130] > GitTreeState:     dirty
	I0817 21:47:20.389752  226555 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0817 21:47:20.389756  226555 command_runner.go:130] > GoVersion:        go1.19.9
	I0817 21:47:20.389760  226555 command_runner.go:130] > Compiler:         gc
	I0817 21:47:20.389778  226555 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:47:20.389783  226555 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:47:20.389790  226555 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:47:20.389795  226555 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:47:20.389799  226555 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:47:20.393924  226555 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 21:47:20.395760  226555 out.go:177]   - env NO_PROXY=192.168.39.104
	I0817 21:47:20.397633  226555 out.go:177]   - env NO_PROXY=192.168.39.104,192.168.39.175
	I0817 21:47:20.399263  226555 main.go:141] libmachine: (multinode-959371-m03) Calling .GetIP
	I0817 21:47:20.402105  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:20.402533  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:93", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:35:46 +0000 UTC Type:0 Mac:52:54:00:82:94:93 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-959371-m03 Clientid:01:52:54:00:82:94:93}
	I0817 21:47:20.402572  226555 main.go:141] libmachine: (multinode-959371-m03) DBG | domain multinode-959371-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:82:94:93 in network mk-multinode-959371
	I0817 21:47:20.402877  226555 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 21:47:20.407842  226555 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0817 21:47:20.407899  226555 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371 for IP: 192.168.39.227
	I0817 21:47:20.407920  226555 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:47:20.408074  226555 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 21:47:20.408137  226555 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 21:47:20.408154  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:47:20.408175  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:47:20.408190  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:47:20.408202  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:47:20.408267  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 21:47:20.408317  226555 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 21:47:20.408330  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:47:20.408357  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 21:47:20.408400  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:47:20.408433  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 21:47:20.408489  226555 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 21:47:20.408528  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> /usr/share/ca-certificates/2106702.pem
	I0817 21:47:20.408549  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:20.408568  226555 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem -> /usr/share/ca-certificates/210670.pem
	I0817 21:47:20.409064  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:47:20.436079  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:47:20.461992  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:47:20.488751  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:47:20.514222  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 21:47:20.539610  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:47:20.564072  226555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 21:47:20.587781  226555 ssh_runner.go:195] Run: openssl version
	I0817 21:47:20.594063  226555 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0817 21:47:20.594189  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 21:47:20.605791  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 21:47:20.611030  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:47:20.611070  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 21:47:20.611129  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 21:47:20.616886  226555 command_runner.go:130] > 3ec20f2e
	I0817 21:47:20.617130  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:47:20.627401  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:47:20.639337  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:20.644603  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:20.644643  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:20.644700  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:20.650724  226555 command_runner.go:130] > b5213941
	I0817 21:47:20.650890  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:47:20.661315  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 21:47:20.672977  226555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 21:47:20.677962  226555 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:47:20.678040  226555 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 21:47:20.678112  226555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 21:47:20.683756  226555 command_runner.go:130] > 51391683
	I0817 21:47:20.684022  226555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
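
The three blocks above follow OpenSSL's hashed-directory convention: each certificate is linked into /etc/ssl/certs under its own name, hashed with "openssl x509 -hash -noout", and then exposed as /etc/ssl/certs/<subject-hash>.0 so the default trust lookup finds it. The same pattern as a loop (a sketch of the technique, not minikube's code):

    for pem in /usr/share/ca-certificates/*.pem; do
      sudo ln -fs "$pem" "/etc/ssl/certs/$(basename "$pem")"
      h=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "/etc/ssl/certs/$(basename "$pem")" "/etc/ssl/certs/${h}.0"
    done
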
	I0817 21:47:20.694894  226555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:47:20.699645  226555 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:47:20.699689  226555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:47:20.699784  226555 ssh_runner.go:195] Run: crio config
	I0817 21:47:20.758962  226555 command_runner.go:130] ! time="2023-08-17 21:47:20.753731052Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0817 21:47:20.759039  226555 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:47:20.768920  226555 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:47:20.768953  226555 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:47:20.768965  226555 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:47:20.768971  226555 command_runner.go:130] > #
	I0817 21:47:20.768983  226555 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:47:20.768993  226555 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:47:20.768999  226555 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:47:20.769010  226555 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:47:20.769021  226555 command_runner.go:130] > # reload'.
	I0817 21:47:20.769027  226555 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:47:20.769034  226555 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:47:20.769045  226555 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:47:20.769054  226555 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:47:20.769060  226555 command_runner.go:130] > [crio]
	I0817 21:47:20.769072  226555 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:47:20.769082  226555 command_runner.go:130] > # containers images, in this directory.
	I0817 21:47:20.769090  226555 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0817 21:47:20.769104  226555 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:47:20.769112  226555 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0817 21:47:20.769118  226555 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:47:20.769128  226555 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:47:20.769139  226555 command_runner.go:130] > storage_driver = "overlay"
	I0817 21:47:20.769151  226555 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:47:20.769165  226555 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:47:20.769175  226555 command_runner.go:130] > storage_option = [
	I0817 21:47:20.769183  226555 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0817 21:47:20.769192  226555 command_runner.go:130] > ]
	I0817 21:47:20.769199  226555 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:47:20.769206  226555 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:47:20.769217  226555 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:47:20.769227  226555 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:47:20.769241  226555 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:47:20.769262  226555 command_runner.go:130] > # always happen on a node reboot
	I0817 21:47:20.769272  226555 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:47:20.769284  226555 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:47:20.769296  226555 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:47:20.769313  226555 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:47:20.769325  226555 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:47:20.769341  226555 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:47:20.769358  226555 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:47:20.769367  226555 command_runner.go:130] > # internal_wipe = true
	I0817 21:47:20.769373  226555 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:47:20.769382  226555 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:47:20.769387  226555 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:47:20.769399  226555 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:47:20.769405  226555 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:47:20.769409  226555 command_runner.go:130] > [crio.api]
	I0817 21:47:20.769415  226555 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:47:20.769420  226555 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:47:20.769425  226555 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:47:20.769432  226555 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:47:20.769438  226555 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:47:20.769446  226555 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:47:20.769450  226555 command_runner.go:130] > # stream_port = "0"
	I0817 21:47:20.769457  226555 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:47:20.769462  226555 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:47:20.769469  226555 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:47:20.769475  226555 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:47:20.769481  226555 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:47:20.769489  226555 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:47:20.769506  226555 command_runner.go:130] > # minutes.
	I0817 21:47:20.769510  226555 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:47:20.769516  226555 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:47:20.769522  226555 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:47:20.769526  226555 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:47:20.769532  226555 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:47:20.769538  226555 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:47:20.769543  226555 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:47:20.769548  226555 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:47:20.769556  226555 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:47:20.769561  226555 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0817 21:47:20.769570  226555 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:47:20.769577  226555 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0817 21:47:20.769599  226555 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:47:20.769608  226555 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:47:20.769612  226555 command_runner.go:130] > [crio.runtime]
	I0817 21:47:20.769617  226555 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:47:20.769626  226555 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:47:20.769632  226555 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:47:20.769638  226555 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:47:20.769645  226555 command_runner.go:130] > # default_ulimits = [
	I0817 21:47:20.769648  226555 command_runner.go:130] > # ]
	I0817 21:47:20.769657  226555 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:47:20.769661  226555 command_runner.go:130] > # no_pivot = false
	I0817 21:47:20.769669  226555 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:47:20.769677  226555 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:47:20.769682  226555 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:47:20.769687  226555 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:47:20.769694  226555 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:47:20.769707  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:47:20.769718  226555 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0817 21:47:20.769729  226555 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:47:20.769737  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:47:20.769743  226555 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:47:20.769751  226555 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:47:20.769759  226555 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:47:20.769766  226555 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:47:20.769773  226555 command_runner.go:130] > conmon_env = [
	I0817 21:47:20.769778  226555 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0817 21:47:20.769784  226555 command_runner.go:130] > ]
	I0817 21:47:20.769791  226555 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:47:20.769803  226555 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:47:20.769816  226555 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:47:20.769825  226555 command_runner.go:130] > # default_env = [
	I0817 21:47:20.769830  226555 command_runner.go:130] > # ]
	I0817 21:47:20.769839  226555 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:47:20.769843  226555 command_runner.go:130] > # selinux = false
	I0817 21:47:20.769849  226555 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:47:20.769857  226555 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:47:20.769863  226555 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:47:20.769870  226555 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:47:20.769876  226555 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:47:20.769889  226555 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:47:20.769904  226555 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:47:20.769914  226555 command_runner.go:130] > # which might increase security.
	I0817 21:47:20.769923  226555 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0817 21:47:20.769930  226555 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:47:20.769938  226555 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:47:20.769944  226555 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:47:20.769953  226555 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0817 21:47:20.769960  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:47:20.769968  226555 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:47:20.769981  226555 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:47:20.769992  226555 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:47:20.770002  226555 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:47:20.770013  226555 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:47:20.770023  226555 command_runner.go:130] > # irqbalance daemon.
	I0817 21:47:20.770029  226555 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:47:20.770036  226555 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:47:20.770041  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:47:20.770045  226555 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:47:20.770066  226555 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:47:20.770076  226555 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:47:20.770087  226555 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:47:20.770097  226555 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:47:20.770112  226555 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:47:20.770125  226555 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:47:20.770131  226555 command_runner.go:130] > # will be added.
	I0817 21:47:20.770136  226555 command_runner.go:130] > # default_capabilities = [
	I0817 21:47:20.770145  226555 command_runner.go:130] > # 	"CHOWN",
	I0817 21:47:20.770154  226555 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:47:20.770164  226555 command_runner.go:130] > # 	"FSETID",
	I0817 21:47:20.770170  226555 command_runner.go:130] > # 	"FOWNER",
	I0817 21:47:20.770179  226555 command_runner.go:130] > # 	"SETGID",
	I0817 21:47:20.770186  226555 command_runner.go:130] > # 	"SETUID",
	I0817 21:47:20.770195  226555 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:47:20.770201  226555 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:47:20.770210  226555 command_runner.go:130] > # 	"KILL",
	I0817 21:47:20.770214  226555 command_runner.go:130] > # ]
	I0817 21:47:20.770225  226555 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:47:20.770237  226555 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:47:20.770252  226555 command_runner.go:130] > # default_sysctls = [
	I0817 21:47:20.770258  226555 command_runner.go:130] > # ]
	I0817 21:47:20.770269  226555 command_runner.go:130] > # List of devices on the host that a
	I0817 21:47:20.770282  226555 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:47:20.770291  226555 command_runner.go:130] > # allowed_devices = [
	I0817 21:47:20.770296  226555 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:47:20.770300  226555 command_runner.go:130] > # ]
	I0817 21:47:20.770306  226555 command_runner.go:130] > # List of additional devices. specified as
	I0817 21:47:20.770322  226555 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:47:20.770335  226555 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:47:20.770369  226555 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:47:20.770380  226555 command_runner.go:130] > # additional_devices = [
	I0817 21:47:20.770385  226555 command_runner.go:130] > # ]
	I0817 21:47:20.770390  226555 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:47:20.770395  226555 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:47:20.770404  226555 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:47:20.770411  226555 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:47:20.770420  226555 command_runner.go:130] > # ]
	I0817 21:47:20.770431  226555 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:47:20.770444  226555 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:47:20.770453  226555 command_runner.go:130] > # Defaults to false.
	I0817 21:47:20.770462  226555 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:47:20.770473  226555 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:47:20.770479  226555 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:47:20.770488  226555 command_runner.go:130] > # hooks_dir = [
	I0817 21:47:20.770498  226555 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:47:20.770507  226555 command_runner.go:130] > # ]
	I0817 21:47:20.770517  226555 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:47:20.770530  226555 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:47:20.770542  226555 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:47:20.770548  226555 command_runner.go:130] > #
	I0817 21:47:20.770557  226555 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:47:20.770569  226555 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:47:20.770582  226555 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:47:20.770590  226555 command_runner.go:130] > #
	I0817 21:47:20.770601  226555 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:47:20.770614  226555 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:47:20.770628  226555 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:47:20.770639  226555 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:47:20.770646  226555 command_runner.go:130] > #
	I0817 21:47:20.770651  226555 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:47:20.770664  226555 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:47:20.770679  226555 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:47:20.770686  226555 command_runner.go:130] > pids_limit = 1024
	I0817 21:47:20.770700  226555 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0817 21:47:20.770713  226555 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:47:20.770726  226555 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:47:20.770737  226555 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:47:20.770747  226555 command_runner.go:130] > # log_size_max = -1
	I0817 21:47:20.770762  226555 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0817 21:47:20.770773  226555 command_runner.go:130] > # log_to_journald = false
	I0817 21:47:20.770784  226555 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:47:20.770795  226555 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:47:20.770806  226555 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:47:20.770818  226555 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:47:20.770829  226555 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:47:20.770840  226555 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:47:20.770853  226555 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:47:20.770863  226555 command_runner.go:130] > # read_only = false
	I0817 21:47:20.770874  226555 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:47:20.770887  226555 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:47:20.770899  226555 command_runner.go:130] > # live configuration reload.
	I0817 21:47:20.770906  226555 command_runner.go:130] > # log_level = "info"
	I0817 21:47:20.770913  226555 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:47:20.770925  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:47:20.770936  226555 command_runner.go:130] > # log_filter = ""
	I0817 21:47:20.770949  226555 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:47:20.770962  226555 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:47:20.770975  226555 command_runner.go:130] > # separated by comma.
	I0817 21:47:20.770984  226555 command_runner.go:130] > # uid_mappings = ""
	I0817 21:47:20.770994  226555 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:47:20.771007  226555 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:47:20.771018  226555 command_runner.go:130] > # separated by comma.
	I0817 21:47:20.771027  226555 command_runner.go:130] > # gid_mappings = ""
	I0817 21:47:20.771040  226555 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:47:20.771054  226555 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:47:20.771068  226555 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:47:20.771077  226555 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:47:20.771085  226555 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:47:20.771100  226555 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:47:20.771114  226555 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:47:20.771123  226555 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:47:20.771136  226555 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:47:20.771149  226555 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:47:20.771159  226555 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0817 21:47:20.771166  226555 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:47:20.771175  226555 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:47:20.771189  226555 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:47:20.771201  226555 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:47:20.771212  226555 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:47:20.771228  226555 command_runner.go:130] > drop_infra_ctr = false
	I0817 21:47:20.771241  226555 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:47:20.771261  226555 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:47:20.771277  226555 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:47:20.771288  226555 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:47:20.771301  226555 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:47:20.771312  226555 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:47:20.771322  226555 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:47:20.771334  226555 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:47:20.771342  226555 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0817 21:47:20.771356  226555 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:47:20.771371  226555 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:47:20.771384  226555 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:47:20.771394  226555 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:47:20.771406  226555 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:47:20.771419  226555 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0817 21:47:20.771434  226555 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0817 21:47:20.771446  226555 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:47:20.771464  226555 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:47:20.771476  226555 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:47:20.771485  226555 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:47:20.771493  226555 command_runner.go:130] > # ]
	I0817 21:47:20.771505  226555 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:47:20.771516  226555 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:47:20.771531  226555 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:47:20.771545  226555 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:47:20.771553  226555 command_runner.go:130] > #
	I0817 21:47:20.771565  226555 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:47:20.771577  226555 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:47:20.771586  226555 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:47:20.771594  226555 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:47:20.771600  226555 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:47:20.771611  226555 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:47:20.771620  226555 command_runner.go:130] > # Where:
	I0817 21:47:20.771632  226555 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:47:20.771647  226555 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:47:20.771661  226555 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:47:20.771673  226555 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:47:20.771679  226555 command_runner.go:130] > #   in $PATH.
	I0817 21:47:20.771688  226555 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:47:20.771700  226555 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:47:20.771714  226555 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:47:20.771723  226555 command_runner.go:130] > #   state.
	I0817 21:47:20.771738  226555 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:47:20.771751  226555 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0817 21:47:20.771762  226555 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:47:20.771771  226555 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:47:20.771784  226555 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:47:20.771800  226555 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:47:20.771811  226555 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:47:20.771825  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:47:20.771840  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:47:20.771850  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:47:20.771861  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:47:20.771877  226555 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:47:20.771892  226555 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:47:20.771906  226555 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:47:20.771920  226555 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:47:20.771930  226555 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:47:20.771937  226555 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:47:20.771943  226555 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0817 21:47:20.771953  226555 command_runner.go:130] > runtime_type = "oci"
	I0817 21:47:20.771964  226555 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:47:20.771974  226555 command_runner.go:130] > runtime_config_path = ""
	I0817 21:47:20.771983  226555 command_runner.go:130] > monitor_path = ""
	I0817 21:47:20.771993  226555 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:47:20.772003  226555 command_runner.go:130] > monitor_exec_cgroup = ""
	I0817 21:47:20.772015  226555 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:47:20.772022  226555 command_runner.go:130] > # running containers
	I0817 21:47:20.772027  226555 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:47:20.772041  226555 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:47:20.772112  226555 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:47:20.772128  226555 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0817 21:47:20.772137  226555 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:47:20.772145  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:47:20.772156  226555 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:47:20.772167  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:47:20.772176  226555 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:47:20.772186  226555 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:47:20.772194  226555 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:47:20.772206  226555 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:47:20.772218  226555 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:47:20.772236  226555 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0817 21:47:20.772256  226555 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:47:20.772269  226555 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:47:20.772281  226555 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:47:20.772297  226555 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:47:20.772311  226555 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:47:20.772324  226555 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:47:20.772334  226555 command_runner.go:130] > # Example:
	I0817 21:47:20.772342  226555 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:47:20.772353  226555 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:47:20.772364  226555 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:47:20.772377  226555 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:47:20.772386  226555 command_runner.go:130] > # cpuset = 0
	I0817 21:47:20.772393  226555 command_runner.go:130] > # cpushares = "0-1"
	I0817 21:47:20.772404  226555 command_runner.go:130] > # Where:
	I0817 21:47:20.772413  226555 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:47:20.772426  226555 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:47:20.772436  226555 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:47:20.772445  226555 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:47:20.772460  226555 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:47:20.772472  226555 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0817 21:47:20.772480  226555 command_runner.go:130] > # 
	I0817 21:47:20.772491  226555 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:47:20.772499  226555 command_runner.go:130] > #
	I0817 21:47:20.772510  226555 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:47:20.772527  226555 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:47:20.772540  226555 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:47:20.772547  226555 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:47:20.772556  226555 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:47:20.772559  226555 command_runner.go:130] > [crio.image]
	I0817 21:47:20.772566  226555 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:47:20.772571  226555 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:47:20.772579  226555 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:47:20.772590  226555 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:47:20.772597  226555 command_runner.go:130] > # global_auth_file = ""
	I0817 21:47:20.772602  226555 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:47:20.772609  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:47:20.772616  226555 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:47:20.772622  226555 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:47:20.772642  226555 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:47:20.772651  226555 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:47:20.772656  226555 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:47:20.772664  226555 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:47:20.772670  226555 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0817 21:47:20.772678  226555 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0817 21:47:20.772684  226555 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:47:20.772690  226555 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:47:20.772697  226555 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:47:20.772705  226555 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:47:20.772712  226555 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:47:20.772720  226555 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:47:20.772725  226555 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:47:20.772730  226555 command_runner.go:130] > # signature_policy = ""
	I0817 21:47:20.772736  226555 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:47:20.772747  226555 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:47:20.772757  226555 command_runner.go:130] > # changing them here.
	I0817 21:47:20.772763  226555 command_runner.go:130] > # insecure_registries = [
	I0817 21:47:20.772771  226555 command_runner.go:130] > # ]
	I0817 21:47:20.772789  226555 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:47:20.772801  226555 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0817 21:47:20.772808  226555 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:47:20.772820  226555 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:47:20.772829  226555 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:47:20.772842  226555 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:47:20.772852  226555 command_runner.go:130] > # CNI plugins.
	I0817 21:47:20.772857  226555 command_runner.go:130] > [crio.network]
	I0817 21:47:20.772871  226555 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:47:20.772883  226555 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0817 21:47:20.772894  226555 command_runner.go:130] > # cni_default_network = ""
	I0817 21:47:20.772905  226555 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:47:20.772915  226555 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:47:20.772924  226555 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:47:20.772931  226555 command_runner.go:130] > # plugin_dirs = [
	I0817 21:47:20.772938  226555 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:47:20.772947  226555 command_runner.go:130] > # ]
	I0817 21:47:20.772956  226555 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:47:20.772965  226555 command_runner.go:130] > [crio.metrics]
	I0817 21:47:20.772974  226555 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:47:20.772983  226555 command_runner.go:130] > enable_metrics = true
	I0817 21:47:20.772991  226555 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:47:20.773002  226555 command_runner.go:130] > # Per default all metrics are enabled.
	I0817 21:47:20.773012  226555 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:47:20.773026  226555 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:47:20.773038  226555 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:47:20.773048  226555 command_runner.go:130] > # metrics_collectors = [
	I0817 21:47:20.773057  226555 command_runner.go:130] > # 	"operations",
	I0817 21:47:20.773066  226555 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:47:20.773084  226555 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:47:20.773094  226555 command_runner.go:130] > # 	"operations_errors",
	I0817 21:47:20.773101  226555 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:47:20.773111  226555 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:47:20.773118  226555 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:47:20.773128  226555 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:47:20.773135  226555 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:47:20.773145  226555 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:47:20.773156  226555 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:47:20.773164  226555 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:47:20.773174  226555 command_runner.go:130] > # 	"containers_oom",
	I0817 21:47:20.773181  226555 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:47:20.773190  226555 command_runner.go:130] > # 	"operations_total",
	I0817 21:47:20.773197  226555 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:47:20.773209  226555 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:47:20.773219  226555 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:47:20.773226  226555 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:47:20.773237  226555 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:47:20.773257  226555 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:47:20.773268  226555 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:47:20.773278  226555 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:47:20.773288  226555 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:47:20.773296  226555 command_runner.go:130] > # ]
	I0817 21:47:20.773304  226555 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:47:20.773314  226555 command_runner.go:130] > # metrics_port = 9090
	I0817 21:47:20.773323  226555 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:47:20.773332  226555 command_runner.go:130] > # metrics_socket = ""
	I0817 21:47:20.773340  226555 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:47:20.773353  226555 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:47:20.773366  226555 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:47:20.773377  226555 command_runner.go:130] > # certificate on any modification event.
	I0817 21:47:20.773388  226555 command_runner.go:130] > # metrics_cert = ""
	I0817 21:47:20.773397  226555 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:47:20.773407  226555 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:47:20.773416  226555 command_runner.go:130] > # metrics_key = ""
	I0817 21:47:20.773428  226555 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:47:20.773441  226555 command_runner.go:130] > [crio.tracing]
	I0817 21:47:20.773454  226555 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:47:20.773463  226555 command_runner.go:130] > # enable_tracing = false
	I0817 21:47:20.773472  226555 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0817 21:47:20.773483  226555 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:47:20.773494  226555 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:47:20.773502  226555 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:47:20.773515  226555 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:47:20.773524  226555 command_runner.go:130] > [crio.stats]
	I0817 21:47:20.773534  226555 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:47:20.773545  226555 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:47:20.773556  226555 command_runner.go:130] > # stats_collection_period = 0
	I0817 21:47:20.773650  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:47:20.773666  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:47:20.773681  226555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:47:20.773708  226555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959371 NodeName:multinode-959371-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:47:20.773874  226555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959371-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:47:20.773925  226555 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-959371-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:47:20.773993  226555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:47:20.784649  226555 command_runner.go:130] > kubeadm
	I0817 21:47:20.784684  226555 command_runner.go:130] > kubectl
	I0817 21:47:20.784689  226555 command_runner.go:130] > kubelet
	I0817 21:47:20.784720  226555 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:47:20.784801  226555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0817 21:47:20.796472  226555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0817 21:47:20.814471  226555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:47:20.834010  226555 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0817 21:47:20.838181  226555 command_runner.go:130] > 192.168.39.104	control-plane.minikube.internal
	I0817 21:47:20.838559  226555 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:47:20.838965  226555 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:47:20.839060  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:47:20.839106  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:47:20.855080  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0817 21:47:20.855582  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:47:20.856242  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:47:20.856266  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:47:20.856577  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:47:20.856788  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:47:20.856932  226555 start.go:301] JoinCluster: &{Name:multinode-959371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-959371 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.175 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false ist
io:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0}
	I0817 21:47:20.857096  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0817 21:47:20.857117  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:47:20.860188  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:47:20.860607  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:47:20.860636  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:47:20.860806  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:47:20.861002  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:47:20.861171  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:47:20.861309  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:47:21.057568  226555 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token httfw5.xqw85ig5bg72kuew --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 21:47:21.059840  226555 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0817 21:47:21.059911  226555 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:47:21.060294  226555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:47:21.060344  226555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:47:21.075626  226555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0817 21:47:21.076102  226555 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:47:21.076620  226555 main.go:141] libmachine: Using API Version  1
	I0817 21:47:21.076639  226555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:47:21.077048  226555 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:47:21.077265  226555 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:47:21.077485  226555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-959371-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0817 21:47:21.077508  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:47:21.080363  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:47:21.080763  226555 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:43:18 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:47:21.080794  226555 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:47:21.081026  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:47:21.081223  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:47:21.081363  226555 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:47:21.081474  226555 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:47:21.278714  226555 command_runner.go:130] > node/multinode-959371-m03 cordoned
	I0817 21:47:24.325158  226555 command_runner.go:130] > pod "busybox-67b7f59bb-xx7f2" has DeletionTimestamp older than 1 seconds, skipping
	I0817 21:47:24.325190  226555 command_runner.go:130] > node/multinode-959371-m03 drained
	I0817 21:47:24.326988  226555 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0817 21:47:24.327012  226555 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cmxkw, kube-system/kube-proxy-g94gj
	I0817 21:47:24.327038  226555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl drain multinode-959371-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.249527188s)
	I0817 21:47:24.327058  226555 node.go:108] successfully drained node "m03"
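The drain above is the standard "cordon, drain, delete, rejoin" sequence for re-adding a worker. A minimal Go sketch of that shell-out is shown below; it mirrors the kubectl flags logged at 21:47:21 (dropping the deprecated --delete-local-data that triggered the warning), but the drainNode helper and the call in main are hypothetical and would run on the control-plane host (or over SSH, as the harness does), not part of minikube's own code.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// drainNode shells out to the bundled kubectl with the same flags seen in the
	// log. Paths and node name are taken from the log; adjust for your setup.
	func drainNode(kubectl, kubeconfig, node string) error {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"drain", node,
			"--force",
			"--grace-period=1",
			"--skip-wait-for-delete-timeout=1",
			"--disable-eviction",
			"--ignore-daemonsets",
			"--delete-emptydir-data") // --delete-local-data is deprecated, omitted here
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}
	
	func main() {
		if err := drainNode("/var/lib/minikube/binaries/v1.27.4/kubectl",
			"/var/lib/minikube/kubeconfig", "multinode-959371-m03"); err != nil {
			fmt.Println("drain failed:", err)
		}
	}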
	I0817 21:47:24.327560  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:47:24.327811  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:47:24.328218  226555 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0817 21:47:24.328271  226555 round_trippers.go:463] DELETE https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:47:24.328279  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:24.328287  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:24.328293  226555 round_trippers.go:473]     Content-Type: application/json
	I0817 21:47:24.328301  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:24.341758  226555 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0817 21:47:24.341789  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:24.341801  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:24.341810  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:24.341819  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:24.341827  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:24.341835  226555 round_trippers.go:580]     Content-Length: 171
	I0817 21:47:24.341843  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:24 GMT
	I0817 21:47:24.341851  226555 round_trippers.go:580]     Audit-Id: e28d830f-1ad8-42ad-b1c9-0352c7261892
	I0817 21:47:24.341881  226555 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-959371-m03","kind":"nodes","uid":"31bc1a59-dff0-4542-804e-a9c019ecd2f4"}}
	I0817 21:47:24.341950  226555 node.go:124] successfully deleted node "m03"
	I0817 21:47:24.341965  226555 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}
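The node removal itself is the DELETE https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03 call logged just above. The same request expressed with client-go looks roughly like the sketch below; it assumes the kubeconfig path shown in the log and is not minikube's own implementation.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path taken from the log; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-203458/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent to the 200 OK DELETE shown in the log.
		if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-959371-m03", metav1.DeleteOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("node deleted")
	}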
	I0817 21:47:24.341990  226555 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0817 21:47:24.342013  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token httfw5.xqw85ig5bg72kuew --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-959371-m03"
	I0817 21:47:24.399296  226555 command_runner.go:130] > [preflight] Running pre-flight checks
	I0817 21:47:24.576875  226555 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0817 21:47:24.576908  226555 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0817 21:47:24.634662  226555 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:47:24.634726  226555 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:47:24.634736  226555 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:47:24.791554  226555 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0817 21:47:25.325735  226555 command_runner.go:130] > This node has joined the cluster:
	I0817 21:47:25.325767  226555 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0817 21:47:25.325776  226555 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0817 21:47:25.325782  226555 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0817 21:47:25.329845  226555 command_runner.go:130] ! W0817 21:47:24.393944    2316 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0817 21:47:25.329880  226555 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0817 21:47:25.329890  226555 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0817 21:47:25.329902  226555 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0817 21:47:25.329930  226555 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0817 21:47:25.589628  226555 start.go:303] JoinCluster complete in 4.732683336s
	I0817 21:47:25.589673  226555 cni.go:84] Creating CNI manager for ""
	I0817 21:47:25.589681  226555 cni.go:136] 3 nodes found, recommending kindnet
	I0817 21:47:25.589754  226555 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:47:25.597238  226555 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:47:25.597274  226555 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0817 21:47:25.597281  226555 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0817 21:47:25.597287  226555 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:47:25.597293  226555 command_runner.go:130] > Access: 2023-08-17 21:43:18.834579600 +0000
	I0817 21:47:25.597298  226555 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0817 21:47:25.597307  226555 command_runner.go:130] > Change: 2023-08-17 21:43:16.782579600 +0000
	I0817 21:47:25.597311  226555 command_runner.go:130] >  Birth: -
	I0817 21:47:25.597828  226555 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:47:25.597853  226555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:47:25.617701  226555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:47:26.097113  226555 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:47:26.097154  226555 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:47:26.097163  226555 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0817 21:47:26.097203  226555 command_runner.go:130] > daemonset.apps/kindnet configured
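After the CNI manifest is (re)applied, a quick way to confirm the kindnet DaemonSet referenced above is healthy is to read it back with client-go. The kindnetReady helper below is a hypothetical sketch (any client-go clientset can be passed in), not part of the test harness.
	package nodecheck
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// kindnetReady reports whether all scheduled kindnet pods are ready.
	func kindnetReady(ctx context.Context, kubeClient kubernetes.Interface) (bool, error) {
		ds, err := kubeClient.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("kindnet desired=%d ready=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberReady)
		return ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
	}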
	I0817 21:47:26.097646  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:47:26.098027  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:47:26.098550  226555 round_trippers.go:463] GET https://192.168.39.104:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:47:26.098572  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.098585  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.098596  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.101568  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.101588  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.101599  226555 round_trippers.go:580]     Audit-Id: 03fa1a65-ae99-429f-ba4e-6ab21f9f98a6
	I0817 21:47:26.101606  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.101618  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.101628  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.101637  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.101647  226555 round_trippers.go:580]     Content-Length: 291
	I0817 21:47:26.101661  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.101699  226555 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15a0cefe-1964-4cfb-951e-96eec4cbbba6","resourceVersion":"885","creationTimestamp":"2023-08-17T21:33:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0817 21:47:26.101826  226555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-959371" context rescaled to 1 replicas
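The "rescaled to 1 replicas" step works through the Scale subresource: the GET .../deployments/coredns/scale above reads the current replica count, and an update writes the desired one back. A hedged client-go sketch of that read-modify-write (scaleCoreDNS is a hypothetical helper name) follows.
	package rescale
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// scaleCoreDNS sets the coredns deployment to the given replica count via the
	// Scale subresource, mirroring the GET shown in the log followed by an update.
	func scaleCoreDNS(ctx context.Context, kubeClient kubernetes.Interface, replicas int32) error {
		scale, err := kubeClient.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = replicas
		_, err = kubeClient.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}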
	I0817 21:47:26.101862  226555 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0817 21:47:26.104956  226555 out.go:177] * Verifying Kubernetes components...
	I0817 21:47:26.106519  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:47:26.120785  226555 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:47:26.121049  226555 kapi.go:59] client config for multinode-959371: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/multinode-959371/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:47:26.121343  226555 node_ready.go:35] waiting up to 6m0s for node "multinode-959371-m03" to be "Ready" ...
	I0817 21:47:26.121425  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:47:26.121437  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.121447  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.121456  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.124110  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.124138  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.124150  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.124158  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.124168  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.124177  226555 round_trippers.go:580]     Audit-Id: b52508d4-4a9e-45ca-b6e5-7c0cf4a7565d
	I0817 21:47:26.124186  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.124195  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.124515  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m03","uid":"69a9ea8e-a783-4d7a-8e4b-dad6890b2c1d","resourceVersion":"1203","creationTimestamp":"2023-08-17T21:47:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:47:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:47:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0817 21:47:26.124833  226555 node_ready.go:49] node "multinode-959371-m03" has status "Ready":"True"
	I0817 21:47:26.124849  226555 node_ready.go:38] duration metric: took 3.489154ms waiting for node "multinode-959371-m03" to be "Ready" ...
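
The node_ready check that just completed is, at its core, a poll of the node's status conditions. A minimal sketch of such a check (the helper name is mine, not minikube's):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the node's NodeReady condition is True,
// which is what the log above summarizes as "Ready":"True".
func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
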
	I0817 21:47:26.124858  226555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:47:26.124926  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I0817 21:47:26.124936  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.124943  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.124953  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.128810  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:47:26.128833  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.128840  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.128846  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.128851  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.128857  226555 round_trippers.go:580]     Audit-Id: f02bb52c-3627-42cf-94a1-c1e8481d83e9
	I0817 21:47:26.128862  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.128867  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.129620  226555 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1211"},"items":[{"metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82090 chars]
	I0817 21:47:26.132114  226555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.132205  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-87rlb
	I0817 21:47:26.132214  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.132222  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.132231  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.134949  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.134973  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.134983  226555 round_trippers.go:580]     Audit-Id: 69eee46e-83b4-434e-a432-c8b378484c76
	I0817 21:47:26.134992  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.135001  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.135014  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.135024  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.135034  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.135155  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-87rlb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"52da85e0-72f0-4919-8615-d1cb46b65ca4","resourceVersion":"872","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c9ba5a5-ecc0-4e85-8f71-2432eef5fdd1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0817 21:47:26.135616  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:26.135631  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.135641  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.135651  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.137799  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.137814  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.137821  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.137826  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.137831  226555 round_trippers.go:580]     Audit-Id: 159f7d87-c10c-4879-b614-59823f9bc353
	I0817 21:47:26.137837  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.137845  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.137854  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.137978  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:47:26.138318  226555 pod_ready.go:92] pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:26.138334  226555 pod_ready.go:81] duration metric: took 6.196236ms waiting for pod "coredns-5d78c9869d-87rlb" in "kube-system" namespace to be "Ready" ...
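
Each of the pod_ready waits in this stretch of the log performs the same kind of check against a pod's status conditions. A hedged sketch of that per-pod test (an illustrative helper, not the actual pod_ready.go implementation):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
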
	I0817 21:47:26.138344  226555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.138389  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-959371
	I0817 21:47:26.138400  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.138408  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.138414  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.140501  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.140519  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.140526  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.140531  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.140537  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.140542  226555 round_trippers.go:580]     Audit-Id: b0200c02-856d-4678-bf91-b5d1e100d792
	I0817 21:47:26.140547  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.140553  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.140660  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-959371","namespace":"kube-system","uid":"0ffe6db5-4285-4788-88b2-073753ece5f3","resourceVersion":"866","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.104:2379","kubernetes.io/config.hash":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.mirror":"524855ea42058e731111bcfa912d2dbe","kubernetes.io/config.seen":"2023-08-17T21:33:26.519088298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0817 21:47:26.141031  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:26.141042  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.141049  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.141056  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.143158  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.143181  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.143189  226555 round_trippers.go:580]     Audit-Id: 790344fa-71b4-4788-a65e-b1291d67fbd9
	I0817 21:47:26.143195  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.143200  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.143205  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.143210  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.143216  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.143399  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:47:26.143715  226555 pod_ready.go:92] pod "etcd-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:26.143727  226555 pod_ready.go:81] duration metric: took 5.378579ms waiting for pod "etcd-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.143744  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.143801  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-959371
	I0817 21:47:26.143809  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.143816  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.143822  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.146202  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.146228  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.146235  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.146241  226555 round_trippers.go:580]     Audit-Id: 0ffaeed0-fa2f-455e-94de-23efd9548339
	I0817 21:47:26.146246  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.146251  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.146263  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.146272  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.146568  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-959371","namespace":"kube-system","uid":"0efb1ae7-705a-47df-91c6-0d9390b68983","resourceVersion":"863","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.104:8443","kubernetes.io/config.hash":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.mirror":"1844dfd193c27ced8aa4dba039096475","kubernetes.io/config.seen":"2023-08-17T21:33:26.519082064Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0817 21:47:26.146978  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:26.146989  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.146997  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.147003  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.149361  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.149377  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.149383  226555 round_trippers.go:580]     Audit-Id: 70b103b7-2eb0-477a-9b3f-1a51e209ae5a
	I0817 21:47:26.149389  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.149394  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.149399  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.149405  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.149410  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.149841  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:47:26.150188  226555 pod_ready.go:92] pod "kube-apiserver-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:26.150203  226555 pod_ready.go:81] duration metric: took 6.453309ms waiting for pod "kube-apiserver-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.150212  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.150270  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-959371
	I0817 21:47:26.150278  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.150285  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.150291  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.152592  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.152608  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.152615  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.152620  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.152625  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.152642  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.152648  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.152660  226555 round_trippers.go:580]     Audit-Id: c349655a-7eb7-42d6-8421-f2ee5d7054d0
	I0817 21:47:26.152786  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-959371","namespace":"kube-system","uid":"00c79d6c-13de-44d6-9ac1-51ab2e4a4a8f","resourceVersion":"892","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.mirror":"8e691503605658781b8470b3d4d7c7b0","kubernetes.io/config.seen":"2023-08-17T21:33:26.519086461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0817 21:47:26.153157  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:26.153167  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.153174  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.153180  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.155211  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.155234  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.155245  226555 round_trippers.go:580]     Audit-Id: 1c5b9483-a79a-4594-b002-758907d886b9
	I0817 21:47:26.155253  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.155263  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.155271  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.155280  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.155288  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.155439  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:47:26.155728  226555 pod_ready.go:92] pod "kube-controller-manager-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:26.155741  226555 pod_ready.go:81] duration metric: took 5.523554ms waiting for pod "kube-controller-manager-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.155750  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.322135  226555 request.go:628] Waited for 166.304835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:47:26.322214  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gdf7
	I0817 21:47:26.322219  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.322227  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.322233  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.326268  226555 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:47:26.326300  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.326311  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.326320  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.326328  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.326337  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.326345  226555 round_trippers.go:580]     Audit-Id: 5d992693-333b-4462-9cd7-6613d2899d20
	I0817 21:47:26.326361  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.326589  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gdf7","generateName":"kube-proxy-","namespace":"kube-system","uid":"00e6f433-51d6-49bb-a927-780720361eb3","resourceVersion":"831","creationTimestamp":"2023-08-17T21:33:39Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0817 21:47:26.521558  226555 request.go:628] Waited for 194.337227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:26.521618  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:26.521623  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.521631  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.521637  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.524581  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:26.524609  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.524620  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.524628  226555 round_trippers.go:580]     Audit-Id: 95ce8503-a5a8-44d5-bede-0e0b8d4ad8fc
	I0817 21:47:26.524637  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.524645  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.524653  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.524660  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.524822  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:47:26.525289  226555 pod_ready.go:92] pod "kube-proxy-8gdf7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:26.525313  226555 pod_ready.go:81] duration metric: took 369.55582ms waiting for pod "kube-proxy-8gdf7" in "kube-system" namespace to be "Ready" ...
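
The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's own token-bucket rate limiter, not from the API server. With QPS and Burst left at zero on the rest.Config (as in the config logged earlier), client-go falls back to its conservative defaults (5 QPS with a burst of 10), so this burst of node and pod GETs gets spaced out. A sketch of raising those limits before building the clientset — 50 and 100 are arbitrary example values, not minikube's settings:

package throttling

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFasterClient raises the client-side QPS/Burst limits that produce the
// "Waited for ... due to client-side throttling" messages seen above.
func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
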
	I0817 21:47:26.525328  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:26.722336  226555 request.go:628] Waited for 196.903242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:47:26.722420  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:47:26.722431  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.722445  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.722459  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.727826  226555 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0817 21:47:26.727857  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.727869  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.727878  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.727887  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.727895  226555 round_trippers.go:580]     Audit-Id: 46b75647-e462-465d-b617-a54fa0f5b5c6
	I0817 21:47:26.727903  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.727915  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.728046  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g94gj","generateName":"kube-proxy-","namespace":"kube-system","uid":"050b1eab-a69f-4f6f-b3b8-f29ef38c9042","resourceVersion":"1208","creationTimestamp":"2023-08-17T21:35:12Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0817 21:47:26.922127  226555 request.go:628] Waited for 193.431363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:47:26.922213  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:47:26.922220  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:26.922232  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:26.922243  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:26.925534  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:47:26.925560  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:26.925569  226555 round_trippers.go:580]     Audit-Id: 48e6f48c-000a-4c18-93d1-be6646f85f6e
	I0817 21:47:26.925576  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:26.925583  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:26.925591  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:26.925600  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:26.925608  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:26 GMT
	I0817 21:47:26.925761  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m03","uid":"69a9ea8e-a783-4d7a-8e4b-dad6890b2c1d","resourceVersion":"1203","creationTimestamp":"2023-08-17T21:47:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:47:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:47:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0817 21:47:27.121550  226555 request.go:628] Waited for 195.326758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:47:27.121623  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g94gj
	I0817 21:47:27.121646  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:27.121663  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:27.121675  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:27.124992  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:47:27.125019  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:27.125029  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:27.125037  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:27.125044  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:27.125051  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:27.125060  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:27 GMT
	I0817 21:47:27.125069  226555 round_trippers.go:580]     Audit-Id: 4b529e10-c7ac-4cf2-92da-00f8113c474d
	I0817 21:47:27.125204  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g94gj","generateName":"kube-proxy-","namespace":"kube-system","uid":"050b1eab-a69f-4f6f-b3b8-f29ef38c9042","resourceVersion":"1219","creationTimestamp":"2023-08-17T21:35:12Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:35:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0817 21:47:27.322159  226555 request.go:628] Waited for 196.479663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:47:27.322218  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m03
	I0817 21:47:27.322223  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:27.322230  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:27.322237  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:27.325078  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:27.325096  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:27.325103  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:27.325109  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:27.325114  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:27.325119  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:27.325126  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:27 GMT
	I0817 21:47:27.325131  226555 round_trippers.go:580]     Audit-Id: 6c0866eb-cca7-46ef-a269-8438eece804b
	I0817 21:47:27.325777  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m03","uid":"69a9ea8e-a783-4d7a-8e4b-dad6890b2c1d","resourceVersion":"1203","creationTimestamp":"2023-08-17T21:47:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:47:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:47:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0817 21:47:27.326111  226555 pod_ready.go:92] pod "kube-proxy-g94gj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:27.326126  226555 pod_ready.go:81] duration metric: took 800.789253ms waiting for pod "kube-proxy-g94gj" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:27.326136  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:27.521542  226555 request.go:628] Waited for 195.313889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:47:27.521607  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmldj
	I0817 21:47:27.521613  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:27.521624  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:27.521645  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:27.524791  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:47:27.524822  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:27.524833  226555 round_trippers.go:580]     Audit-Id: 70678567-75d3-46d9-98a8-4ee7b2bb1ae1
	I0817 21:47:27.524842  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:27.524850  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:27.524859  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:27.524871  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:27.524883  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:27 GMT
	I0817 21:47:27.525027  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmldj","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac59040d-df0c-416f-9660-4a41f7b75789","resourceVersion":"1045","creationTimestamp":"2023-08-17T21:34:24Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"807ac87b-b108-4063-a3fa-8d5e03195245","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"807ac87b-b108-4063-a3fa-8d5e03195245\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0817 21:47:27.721871  226555 request.go:628] Waited for 196.322911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:47:27.721950  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371-m02
	I0817 21:47:27.721957  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:27.721976  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:27.721990  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:27.725550  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:47:27.725573  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:27.725581  226555 round_trippers.go:580]     Audit-Id: dffad1bf-f531-4a4d-8a0a-c3dab53b8a55
	I0817 21:47:27.725587  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:27.725592  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:27.725599  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:27.725607  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:27.725618  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:27 GMT
	I0817 21:47:27.726595  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371-m02","uid":"678576f5-add2-4ca6-91e2-ac74cb1639ff","resourceVersion":"1030","creationTimestamp":"2023-08-17T21:45:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:45:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0817 21:47:27.726930  226555 pod_ready.go:92] pod "kube-proxy-zmldj" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:27.726947  226555 pod_ready.go:81] duration metric: took 400.805034ms waiting for pod "kube-proxy-zmldj" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:27.726957  226555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:27.922191  226555 request.go:628] Waited for 195.117828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:47:27.922270  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-959371
	I0817 21:47:27.922278  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:27.922294  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:27.922306  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:27.925298  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:27.925330  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:27.925342  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:27 GMT
	I0817 21:47:27.925351  226555 round_trippers.go:580]     Audit-Id: cc55d18c-bf57-43e6-bc93-2ecfc1fd7fe3
	I0817 21:47:27.925360  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:27.925368  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:27.925377  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:27.925384  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:27.925575  226555 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-959371","namespace":"kube-system","uid":"a4d90c5b-20e4-430b-9d1e-f11c4b9edbb2","resourceVersion":"882","creationTimestamp":"2023-08-17T21:33:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.mirror":"010b5eeb8ae476ddfe7bf4d61569f753","kubernetes.io/config.seen":"2023-08-17T21:33:26.519087461Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:33:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0817 21:47:28.122393  226555 request.go:628] Waited for 196.384013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:28.122462  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/multinode-959371
	I0817 21:47:28.122467  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:28.122475  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:28.122481  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:28.125411  226555 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:47:28.125440  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:28.125453  226555 round_trippers.go:580]     Audit-Id: b915c260-984c-43f5-a7ac-917bd2ce0511
	I0817 21:47:28.125462  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:28.125469  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:28.125476  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:28.125483  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:28.125491  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:28 GMT
	I0817 21:47:28.125666  226555 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:33:22Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0817 21:47:28.126043  226555 pod_ready.go:92] pod "kube-scheduler-multinode-959371" in "kube-system" namespace has status "Ready":"True"
	I0817 21:47:28.126079  226555 pod_ready.go:81] duration metric: took 399.114672ms waiting for pod "kube-scheduler-multinode-959371" in "kube-system" namespace to be "Ready" ...
	I0817 21:47:28.126100  226555 pod_ready.go:38] duration metric: took 2.001227304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:47:28.126119  226555 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:47:28.126177  226555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:47:28.141012  226555 system_svc.go:56] duration metric: took 14.877289ms WaitForService to wait for kubelet.
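
The kubelet check above shells out to systemd: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so the exit status alone answers the question. A minimal local equivalent (run directly rather than over SSH as the ssh_runner line does):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
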
	I0817 21:47:28.141047  226555 kubeadm.go:581] duration metric: took 2.03915471s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:47:28.141069  226555 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:47:28.321470  226555 request.go:628] Waited for 180.305909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I0817 21:47:28.321531  226555 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I0817 21:47:28.321536  226555 round_trippers.go:469] Request Headers:
	I0817 21:47:28.321544  226555 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:47:28.321550  226555 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:47:28.325169  226555 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:47:28.325191  226555 round_trippers.go:577] Response Headers:
	I0817 21:47:28.325198  226555 round_trippers.go:580]     Audit-Id: 58a6fea9-b39f-4219-917d-42953d993d22
	I0817 21:47:28.325204  226555 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:47:28.325209  226555 round_trippers.go:580]     Content-Type: application/json
	I0817 21:47:28.325215  226555 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8966737-dcd5-4519-b9d6-919999406f3d
	I0817 21:47:28.325223  226555 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcae2621-8ffe-4813-8781-e8a5c65f30af
	I0817 21:47:28.325229  226555 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:47:28 GMT
	I0817 21:47:28.325622  226555 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1222"},"items":[{"metadata":{"name":"multinode-959371","uid":"89a04626-e80d-40e9-a429-2fb9f3a674de","resourceVersion":"903","creationTimestamp":"2023-08-17T21:33:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-959371","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-959371","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_33_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15135 chars]
	I0817 21:47:28.326245  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:47:28.326265  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:47:28.326277  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:47:28.326284  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:47:28.326289  226555 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 21:47:28.326296  226555 node_conditions.go:123] node cpu capacity is 2
	I0817 21:47:28.326301  226555 node_conditions.go:105] duration metric: took 185.220836ms to run NodePressure ...
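
The NodePressure step reads each node's capacity straight from its status, which is where the 17784752Ki ephemeral-storage and cpu=2 figures above come from. A short sketch of listing those two capacities with client-go (clientset construction as in the earlier example; the function name is mine):

package capacity

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities lists every node and prints the same two capacity
// figures the log above reports: cpu and ephemeral-storage.
func printNodeCapacities(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
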
	I0817 21:47:28.326318  226555 start.go:228] waiting for startup goroutines ...
	I0817 21:47:28.326341  226555 start.go:242] writing updated cluster config ...
	I0817 21:47:28.326664  226555 ssh_runner.go:195] Run: rm -f paused
	I0817 21:47:28.382154  226555 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 21:47:28.385842  226555 out.go:177] * Done! kubectl is now configured to use "multinode-959371" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 21:43:17 UTC, ends at Thu 2023-08-17 21:47:29 UTC. --
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.539277198Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=63809a0b-fae8-4416-b895-cd20e8fb6146 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.539504355Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-9c77m,Uid:4b9baf46-70e7-4d95-b774-9c12c6970154,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308638534932060,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650128800Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-87rlb,Uid:52da85e0-72f0-4919-8615-d1cb46b65ca4,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1692308638514163324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650117985Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e8aa1192-3588-49da-be88-15a801d006fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308631047260271,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]strin
g{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T21:43:50.650129934Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&PodSandboxMetadata{Name:kube-proxy-8gdf7,Uid:00e6f433-51d6-49bb-a927-780720361eb3,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1692308631010927897,Labels:map[string]string{controller-revision-hash: 86cc8bcbf7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361eb3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650126648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&PodSandboxMetadata{Name:kindnet-s7l7j,Uid:6af177c8-cc30-4a86-98d8-443cef5036d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308630997016601,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af177c8-cc30-4a86-98d8-443cef5036d8,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650124405Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-959371,Uid:1844dfd193c27ced8aa4dba039096475,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624196867215,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.104:8443,kubernetes.io/config.hash: 1844dfd193c27ced8aa4dba039096475,kubernetes.io/config.seen: 2023-08-17T21:43:43.641778378Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:58ec3fa6996e222b1c8a93b60ff7607de4a
b4c69d541a8f78ce4f044163b9d35,Metadata:&PodSandboxMetadata{Name:etcd-multinode-959371,Uid:524855ea42058e731111bcfa912d2dbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624183230257,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.104:2379,kubernetes.io/config.hash: 524855ea42058e731111bcfa912d2dbe,kubernetes.io/config.seen: 2023-08-17T21:43:43.641784274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-959371,Uid:010b5eeb8ae476ddfe7bf4d61569f753,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624172503152,Labels:map[string]string{compo
nent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 010b5eeb8ae476ddfe7bf4d61569f753,kubernetes.io/config.seen: 2023-08-17T21:43:43.641783524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-959371,Uid:8e691503605658781b8470b3d4d7c7b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624125740276,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: 8e691503605658781b8470b3d4d7c7b0,kubernetes.io/config.seen: 2023-08-17T21:43:43.641782547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=63809a0b-fae8-4416-b895-cd20e8fb6146 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.540174718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6d07dabc-06b6-486a-8949-fc46411640c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.540231439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6d07dabc-06b6-486a-8949-fc46411640c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.540463059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692308632411026883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361
eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annota
tions:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.hash
: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6d07dabc-06b6-486a-8949-fc46411640c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.549010716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8768f9d0-650f-435f-947f-250f4811c77e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.549099617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8768f9d0-650f-435f-947f-250f4811c77e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.549357678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692308632411026883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361
eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annota
tions:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.hash
: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8768f9d0-650f-435f-947f-250f4811c77e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.590191608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=86ebbd8b-b9e7-48bf-923a-631db15093a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.590280101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=86ebbd8b-b9e7-48bf-923a-631db15093a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.590479734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692308632411026883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361
eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annota
tions:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.hash
: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=86ebbd8b-b9e7-48bf-923a-631db15093a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.630454125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4d8169f2-d977-4486-94f9-e983f479995e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.630549347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4d8169f2-d977-4486-94f9-e983f479995e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.630880369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692308632411026883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361
eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annota
tions:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.hash
: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4d8169f2-d977-4486-94f9-e983f479995e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.675345253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1bc52dea-9359-49ff-bcb5-50a7703f33d6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.675410346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1bc52dea-9359-49ff-bcb5-50a7703f33d6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.675791516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692308632411026883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361
eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annota
tions:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.hash
: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1bc52dea-9359-49ff-bcb5-50a7703f33d6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.685003238Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0de0bda6-041d-4fbe-8644-318cd2f749ba name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.685219482Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-9c77m,Uid:4b9baf46-70e7-4d95-b774-9c12c6970154,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308638534932060,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650128800Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-87rlb,Uid:52da85e0-72f0-4919-8615-d1cb46b65ca4,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1692308638514163324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650117985Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e8aa1192-3588-49da-be88-15a801d006fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308631047260271,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]strin
g{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T21:43:50.650129934Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&PodSandboxMetadata{Name:kube-proxy-8gdf7,Uid:00e6f433-51d6-49bb-a927-780720361eb3,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1692308631010927897,Labels:map[string]string{controller-revision-hash: 86cc8bcbf7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361eb3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650126648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&PodSandboxMetadata{Name:kindnet-s7l7j,Uid:6af177c8-cc30-4a86-98d8-443cef5036d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308630997016601,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af177c8-cc30-4a86-98d8-443cef5036d8,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T21:43:50.650124405Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-959371,Uid:1844dfd193c27ced8aa4dba039096475,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624196867215,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.104:8443,kubernetes.io/config.hash: 1844dfd193c27ced8aa4dba039096475,kubernetes.io/config.seen: 2023-08-17T21:43:43.641778378Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:58ec3fa6996e222b1c8a93b60ff7607de4a
b4c69d541a8f78ce4f044163b9d35,Metadata:&PodSandboxMetadata{Name:etcd-multinode-959371,Uid:524855ea42058e731111bcfa912d2dbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624183230257,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.104:2379,kubernetes.io/config.hash: 524855ea42058e731111bcfa912d2dbe,kubernetes.io/config.seen: 2023-08-17T21:43:43.641784274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-959371,Uid:010b5eeb8ae476ddfe7bf4d61569f753,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624172503152,Labels:map[string]string{compo
nent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 010b5eeb8ae476ddfe7bf4d61569f753,kubernetes.io/config.seen: 2023-08-17T21:43:43.641783524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-959371,Uid:8e691503605658781b8470b3d4d7c7b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692308624125740276,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: 8e691503605658781b8470b3d4d7c7b0,kubernetes.io/config.seen: 2023-08-17T21:43:43.641782547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=0de0bda6-041d-4fbe-8644-318cd2f749ba name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.686070311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=806dc386-4866-4e3f-bc7e-39fc45bae362 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.686122434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=806dc386-4866-4e3f-bc7e-39fc45bae362 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.686344144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-7807203
61eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Anno
tations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.
container.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=806dc386-4866-4e3f-bc7e-39fc45bae362 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.712283015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ffc970e-fd90-4d0f-b99e-6b5d4ed1d3d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.712349205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ffc970e-fd90-4d0f-b99e-6b5d4ed1d3d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 21:47:29 multinode-959371 crio[715]: time="2023-08-17 21:47:29.712542917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b33cfdf2517b73c0de819619b902830adffc075307d9a2985129808da3a80f,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692308662912069655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba3c38c6e61d167f703c4e603ce13d381185892b1db0c0f234fb2f2d66511c7,PodSandboxId:434b6f16e9c15d80ab379074d8d09284651216e4cb9115e5a6293101210cbae2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1692308640269371117,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-9c77m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4b9baf46-70e7-4d95-b774-9c12c6970154,},Annotations:map[string]string{io.kubernetes.container.hash: 7493206b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d,PodSandboxId:05d0010803098ff369a73c2bf0dc0bc9ea6847b1ead66981959c848691bcddee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692308639198833382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-87rlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52da85e0-72f0-4919-8615-d1cb46b65ca4,},Annotations:map[string]string{io.kubernetes.container.hash: cce5ea5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821,PodSandboxId:4dc6313db56ef04e3b82f21beba13c13bb54388978d345585f1dc14600c72c64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1692308634149215599,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s7l7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6af177c8-cc30-4a86-98d8-443cef5036d8,},Annotations:map[string]string{io.kubernetes.container.hash: 75a7742f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8,PodSandboxId:15ed2559b5dc15934239bd947de5ca6051ab78548959132ff7050d822325e507,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1692308632411026883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e8aa1192-3588-49da-be88-15a801d006fc,},Annotations:map[string]string{io.kubernetes.container.hash: b57e0b97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816,PodSandboxId:3e5edbf01ce12d950f2cfcbe28f3fd328a1105e7519ee2fb5e4ca6ee6e569a08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692308632031725197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8gdf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e6f433-51d6-49bb-a927-780720361
eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 71881fca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7,PodSandboxId:cfee907125dc2ce944f3a3410bc6ebe4860beee988b072f98306561526e53ad6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692308625300031340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010b5eeb8ae476ddfe7bf4d61569f753,},Annota
tions:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8,PodSandboxId:58ec3fa6996e222b1c8a93b60ff7607de4ab4c69d541a8f78ce4f044163b9d35,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692308624972518261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 524855ea42058e731111bcfa912d2dbe,},Annotations:map[string]string{io.kubernetes.container.hash
: 69f53455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c,PodSandboxId:9e4404d6c11e93e27dc95289e25cc98e908a0a6dc8c4ea4457d1eeecb1aa56ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692308624869198043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844dfd193c27ced8aa4dba039096475,},Annotations:map[string]string{io.kubernetes.container.hash: 2ea5cdad,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8,PodSandboxId:44c03ebb5a6b2e25e963f87b2ff812d2d7f6cb436cc540eb37494567c5f06efc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692308624573493824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959371,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e691503605658781b8470b3d4d7c7b0,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e797b7a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ffc970e-fd90-4d0f-b99e-6b5d4ed1d3d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	b3b33cfdf2517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   15ed2559b5dc1
	eba3c38c6e61d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   434b6f16e9c15
	b2e9ad52fded6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   05d0010803098
	4b6399146068b       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   4dc6313db56ef
	d18337fa13eb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   15ed2559b5dc1
	3054966d3f7c9       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                      3 minutes ago       Running             kube-proxy                1                   3e5edbf01ce12
	8e50526cebd57       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                      3 minutes ago       Running             kube-scheduler            1                   cfee907125dc2
	127edff255023       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      3 minutes ago       Running             etcd                      1                   58ec3fa6996e2
	7ee08c1782278       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                      3 minutes ago       Running             kube-apiserver            1                   9e4404d6c11e9
	77107b6d636dd       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                      3 minutes ago       Running             kube-controller-manager   1                   44c03ebb5a6b2
	
	* 
	* ==> coredns [b2e9ad52fded60939fb3bd72a8455d6d5929b159d729c0fa82892e2e8d6bb67d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42915 - 19 "HINFO IN 8880250770346854231.1207414849980614023. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011956762s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-959371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=multinode-959371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_33_27_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959371
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:44:20 +0000   Thu, 17 Aug 2023 21:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:44:20 +0000   Thu, 17 Aug 2023 21:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:44:20 +0000   Thu, 17 Aug 2023 21:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:44:20 +0000   Thu, 17 Aug 2023 21:43:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    multinode-959371
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcfbba5eccdf437581b014b838d975be
	  System UUID:                dcfbba5e-ccdf-4375-81b0-14b838d975be
	  Boot ID:                    d5ae11fd-580a-4bad-a7be-8b09f042d280
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-9c77m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5d78c9869d-87rlb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-959371                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-s7l7j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-959371             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-959371    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8gdf7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-959371             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m36s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-959371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-959371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-959371 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-959371 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-959371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-959371 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-959371 event: Registered Node multinode-959371 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-959371 status is now: NodeReady
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m46s)  kubelet          Node multinode-959371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m46s)  kubelet          Node multinode-959371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x7 over 3m46s)  kubelet          Node multinode-959371 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node multinode-959371 event: Registered Node multinode-959371 in Controller
	
	
	Name:               multinode-959371-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959371-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:45:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959371-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:45:44 +0000   Thu, 17 Aug 2023 21:45:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:45:44 +0000   Thu, 17 Aug 2023 21:45:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:45:44 +0000   Thu, 17 Aug 2023 21:45:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:45:44 +0000   Thu, 17 Aug 2023 21:45:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    multinode-959371-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a378129c4d6a4a71bb2566f0c8e30009
	  System UUID:                a378129c-4d6a-4a71-bb25-66f0c8e30009
	  Boot ID:                    089a50e0-aeb8-4119-be98-e1239ba242e0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-bbwnr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-xjn26              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-zmldj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 103s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-959371-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-959371-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-959371-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-959371-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m49s                kubelet     Node multinode-959371-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m8s (x2 over 3m8s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       108s                 kubelet     Node multinode-959371-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 106s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)  kubelet     Node multinode-959371-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)  kubelet     Node multinode-959371-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)  kubelet     Node multinode-959371-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                 kubelet     Node multinode-959371-m02 status is now: NodeReady
	
	
	Name:               multinode-959371-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959371-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:47:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-959371-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:47:25 +0000   Thu, 17 Aug 2023 21:47:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:47:25 +0000   Thu, 17 Aug 2023 21:47:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:47:25 +0000   Thu, 17 Aug 2023 21:47:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:47:25 +0000   Thu, 17 Aug 2023 21:47:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    multinode-959371-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 651610c184094b4282bd217ee1c2820e
	  System UUID:                651610c1-8409-4b42-82bd-217ee1c2820e
	  Boot ID:                    0b9d880e-e95f-46de-b29b-4cde6483703c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-xx7f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-cmxkw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-g94gj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-959371-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-959371-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-959371-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-959371-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-959371-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-959371-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-959371-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-959371-m03 status is now: NodeReady
	  Normal   NodeNotReady             68s                kubelet     Node multinode-959371-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        35s (x2 over 95s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       7s                 kubelet     Node multinode-959371-m03 status is now: NodeNotSchedulable
	  Normal   Starting                 6s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-959371-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-959371-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-959371-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-959371-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug17 21:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.355799] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.344994] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148643] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.631388] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.842054] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.104559] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.147876] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.107897] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.204239] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +17.023368] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [127edff255023b649d04b3fb5432df7392ceff6ff7c779465817d9ddcfe1c4d8] <==
	* {"level":"info","ts":"2023-08-17T21:43:46.736Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:43:46.736Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:43:46.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd switched to configuration voters=(2465202773188110525)"}
	{"level":"info","ts":"2023-08-17T21:43:46.741Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","added-peer-id":"223628dc6b2f68bd","added-peer-peer-urls":["https://192.168.39.104:2380"]}
	{"level":"info","ts":"2023-08-17T21:43:46.742Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:43:46.742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:43:46.743Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-17T21:43:46.743Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"223628dc6b2f68bd","initial-advertise-peer-urls":["https://192.168.39.104:2380"],"listen-peer-urls":["https://192.168.39.104:2380"],"advertise-client-urls":["https://192.168.39.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T21:43:46.743Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T21:43:46.743Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2023-08-17T21:43:46.744Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgPreVoteResp from 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became candidate at term 3"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgVoteResp from 223628dc6b2f68bd at term 3"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became leader at term 3"}
	{"level":"info","ts":"2023-08-17T21:43:48.069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 223628dc6b2f68bd elected leader 223628dc6b2f68bd at term 3"}
	{"level":"info","ts":"2023-08-17T21:43:48.072Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:43:48.072Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"223628dc6b2f68bd","local-member-attributes":"{Name:multinode-959371 ClientURLs:[https://192.168.39.104:2379]}","request-path":"/0/members/223628dc6b2f68bd/attributes","cluster-id":"bcba49d8b8764a98","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T21:43:48.073Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:43:48.074Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.104:2379"}
	{"level":"info","ts":"2023-08-17T21:43:48.074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T21:43:48.077Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-17T21:43:48.082Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:47:30 up 4 min,  0 users,  load average: 0.45, 0.40, 0.19
	Linux multinode-959371 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [4b6399146068bdbf0eabfb34875d2f71caad4c0f64d4b04c7899b3af87822821] <==
	* I0817 21:46:55.559781       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:46:55.559836       1 main.go:227] handling current node
	I0817 21:46:55.559847       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0817 21:46:55.559853       1 main.go:250] Node multinode-959371-m02 has CIDR [10.244.1.0/24] 
	I0817 21:46:55.560073       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0817 21:46:55.560115       1 main.go:250] Node multinode-959371-m03 has CIDR [10.244.3.0/24] 
	I0817 21:47:05.574891       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:47:05.575096       1 main.go:227] handling current node
	I0817 21:47:05.575145       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0817 21:47:05.575167       1 main.go:250] Node multinode-959371-m02 has CIDR [10.244.1.0/24] 
	I0817 21:47:05.575279       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0817 21:47:05.575299       1 main.go:250] Node multinode-959371-m03 has CIDR [10.244.3.0/24] 
	I0817 21:47:15.581454       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:47:15.581513       1 main.go:227] handling current node
	I0817 21:47:15.581525       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0817 21:47:15.581532       1 main.go:250] Node multinode-959371-m02 has CIDR [10.244.1.0/24] 
	I0817 21:47:15.581763       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0817 21:47:15.581800       1 main.go:250] Node multinode-959371-m03 has CIDR [10.244.3.0/24] 
	I0817 21:47:25.588219       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0817 21:47:25.588532       1 main.go:227] handling current node
	I0817 21:47:25.588782       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0817 21:47:25.588898       1 main.go:250] Node multinode-959371-m02 has CIDR [10.244.1.0/24] 
	I0817 21:47:25.589157       1 main.go:223] Handling node with IPs: map[192.168.39.227:{}]
	I0817 21:47:25.589213       1 main.go:250] Node multinode-959371-m03 has CIDR [10.244.2.0/24] 
	I0817 21:47:25.589338       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.227 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [7ee08c17822784f0d3fe1fe466f26c488640018ae24b098d33353c576e07349c] <==
	* I0817 21:43:49.814180       1 establishing_controller.go:76] Starting EstablishingController
	I0817 21:43:49.814226       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0817 21:43:49.814259       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0817 21:43:49.814291       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0817 21:43:49.893187       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 21:43:49.939375       1 shared_informer.go:318] Caches are synced for configmaps
	I0817 21:43:49.939523       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0817 21:43:49.939565       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0817 21:43:49.939709       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0817 21:43:49.942718       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 21:43:49.947846       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0817 21:43:49.948750       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:43:49.949029       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0817 21:43:49.949078       1 aggregator.go:152] initial CRD sync complete...
	I0817 21:43:49.949086       1 autoregister_controller.go:141] Starting autoregister controller
	I0817 21:43:49.949090       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0817 21:43:49.949095       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:43:50.445174       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:43:50.742906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0817 21:43:52.837894       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0817 21:43:53.020365       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0817 21:43:53.039849       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0817 21:43:53.142247       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 21:43:53.151893       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 21:44:40.423751       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [77107b6d636dd7d28d227b8097e86613a7a82753d840c9fe7ec9a8613f4396e8] <==
	* I0817 21:44:02.503915       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0817 21:44:02.508709       1 shared_informer.go:318] Caches are synced for persistent volume
	I0817 21:44:02.515215       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-959371"
	I0817 21:44:02.515306       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-959371-m02"
	I0817 21:44:02.515329       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-959371-m03"
	I0817 21:44:02.516679       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0817 21:44:02.880427       1 shared_informer.go:318] Caches are synced for garbage collector
	I0817 21:44:02.915528       1 shared_informer.go:318] Caches are synced for garbage collector
	I0817 21:44:02.915683       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	W0817 21:44:41.433763       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m03 node
	I0817 21:45:40.494713       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-xx7f2"
	W0817 21:45:43.498872       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m03 node
	W0817 21:45:44.166526       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m03 node
	I0817 21:45:44.166554       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959371-m02\" does not exist"
	I0817 21:45:44.168146       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-65x2b" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-65x2b"
	I0817 21:45:44.182562       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959371-m02" podCIDRs=[10.244.1.0/24]
	W0817 21:45:44.303764       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m02 node
	W0817 21:46:22.487988       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m02 node
	I0817 21:47:21.333678       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-bbwnr"
	W0817 21:47:24.337443       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m02 node
	W0817 21:47:25.021723       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m02 node
	I0817 21:47:25.023467       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-xx7f2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-xx7f2"
	I0817 21:47:25.024035       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959371-m03\" does not exist"
	I0817 21:47:25.055920       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959371-m03" podCIDRs=[10.244.2.0/24]
	W0817 21:47:25.165912       1 topologycache.go:232] Can't get CPU or zone information for multinode-959371-m02 node
	
	* 
	* ==> kube-proxy [3054966d3f7c9956204f98246b81cead07d2910c8c1c9fdceff1d4c07bb4e816] <==
	* I0817 21:43:52.921878       1 node.go:141] Successfully retrieved node IP: 192.168.39.104
	I0817 21:43:52.923189       1 server_others.go:110] "Detected node IP" address="192.168.39.104"
	I0817 21:43:52.923277       1 server_others.go:554] "Using iptables proxy"
	I0817 21:43:53.045340       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 21:43:53.045423       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:43:53.046168       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:43:53.046966       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:43:53.047047       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:43:53.049869       1 config.go:188] "Starting service config controller"
	I0817 21:43:53.050119       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:43:53.050389       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:43:53.050421       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:43:53.051393       1 config.go:315] "Starting node config controller"
	I0817 21:43:53.051431       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:43:53.153033       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:43:53.153207       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 21:43:53.153486       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8e50526cebd57346f1445596a0dcbf4cb43d0fda76cf7a938f70344b6176f2d7] <==
	* I0817 21:43:48.023165       1 serving.go:348] Generated self-signed cert in-memory
	W0817 21:43:49.851162       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 21:43:49.851332       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 21:43:49.851442       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 21:43:49.851470       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 21:43:49.898479       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0817 21:43:49.898701       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:43:49.906380       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 21:43:49.906509       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 21:43:49.907916       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0817 21:43:49.908355       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0817 21:43:50.007783       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 21:43:17 UTC, ends at Thu 2023-08-17 21:47:30 UTC. --
	Aug 17 21:43:52 multinode-959371 kubelet[921]: E0817 21:43:52.422161     921 projected.go:198] Error preparing data for projected volume kube-api-access-r66m7 for pod default/busybox-67b7f59bb-9c77m: object "default"/"kube-root-ca.crt" not registered
	Aug 17 21:43:52 multinode-959371 kubelet[921]: E0817 21:43:52.422204     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b9baf46-70e7-4d95-b774-9c12c6970154-kube-api-access-r66m7 podName:4b9baf46-70e7-4d95-b774-9c12c6970154 nodeName:}" failed. No retries permitted until 2023-08-17 21:43:54.4221912 +0000 UTC m=+11.005408063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r66m7" (UniqueName: "kubernetes.io/projected/4b9baf46-70e7-4d95-b774-9c12c6970154-kube-api-access-r66m7") pod "busybox-67b7f59bb-9c77m" (UID: "4b9baf46-70e7-4d95-b774-9c12c6970154") : object "default"/"kube-root-ca.crt" not registered
	Aug 17 21:43:52 multinode-959371 kubelet[921]: E0817 21:43:52.683398     921 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-9c77m" podUID=4b9baf46-70e7-4d95-b774-9c12c6970154
	Aug 17 21:43:52 multinode-959371 kubelet[921]: E0817 21:43:52.683952     921 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-87rlb" podUID=52da85e0-72f0-4919-8615-d1cb46b65ca4
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.339220     921 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.339285     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52da85e0-72f0-4919-8615-d1cb46b65ca4-config-volume podName:52da85e0-72f0-4919-8615-d1cb46b65ca4 nodeName:}" failed. No retries permitted until 2023-08-17 21:43:58.339270403 +0000 UTC m=+14.922487254 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/52da85e0-72f0-4919-8615-d1cb46b65ca4-config-volume") pod "coredns-5d78c9869d-87rlb" (UID: "52da85e0-72f0-4919-8615-d1cb46b65ca4") : object "kube-system"/"coredns" not registered
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.440337     921 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.440430     921 projected.go:198] Error preparing data for projected volume kube-api-access-r66m7 for pod default/busybox-67b7f59bb-9c77m: object "default"/"kube-root-ca.crt" not registered
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.440481     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b9baf46-70e7-4d95-b774-9c12c6970154-kube-api-access-r66m7 podName:4b9baf46-70e7-4d95-b774-9c12c6970154 nodeName:}" failed. No retries permitted until 2023-08-17 21:43:58.440467786 +0000 UTC m=+15.023684638 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r66m7" (UniqueName: "kubernetes.io/projected/4b9baf46-70e7-4d95-b774-9c12c6970154-kube-api-access-r66m7") pod "busybox-67b7f59bb-9c77m" (UID: "4b9baf46-70e7-4d95-b774-9c12c6970154") : object "default"/"kube-root-ca.crt" not registered
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.683566     921 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-87rlb" podUID=52da85e0-72f0-4919-8615-d1cb46b65ca4
	Aug 17 21:43:54 multinode-959371 kubelet[921]: E0817 21:43:54.683750     921 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-9c77m" podUID=4b9baf46-70e7-4d95-b774-9c12c6970154
	Aug 17 21:43:55 multinode-959371 kubelet[921]: I0817 21:43:55.573441     921 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 17 21:44:22 multinode-959371 kubelet[921]: I0817 21:44:22.876011     921 scope.go:115] "RemoveContainer" containerID="d18337fa13eb97ebcc6406aaca5bdb9a98ff651b5865d4695ae28b472c4c83b8"
	Aug 17 21:44:43 multinode-959371 kubelet[921]: E0817 21:44:43.715146     921 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 21:44:43 multinode-959371 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 21:44:43 multinode-959371 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 21:44:43 multinode-959371 kubelet[921]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 21:45:43 multinode-959371 kubelet[921]: E0817 21:45:43.718246     921 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 21:45:43 multinode-959371 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 21:45:43 multinode-959371 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 21:45:43 multinode-959371 kubelet[921]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 21:46:43 multinode-959371 kubelet[921]: E0817 21:46:43.715863     921 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 21:46:43 multinode-959371 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 21:46:43 multinode-959371 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 21:46:43 multinode-959371 kubelet[921]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-959371 -n multinode-959371
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-959371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (684.90s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 stop
E0817 21:48:09.344511  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959371 stop: exit status 82 (2m1.052067026s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-959371"  ...
	* Stopping node "multinode-959371"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-959371 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959371 status: exit status 3 (18.60545447s)

                                                
                                                
-- stdout --
	multinode-959371
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-959371-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 21:49:52.830422  228943 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0817 21:49:52.830476  228943 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-959371 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-959371 -n multinode-959371
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-959371 -n multinode-959371: exit status 3 (3.182497082s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 21:49:56.190479  229035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0817 21:49:56.190505  229035 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-959371" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.84s)

                                                
                                    
x
+
TestPreload (192.52s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-269501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-269501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.852034165s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-269501 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-269501 image pull gcr.io/k8s-minikube/busybox: (1.127423022s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-269501
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-269501: (10.099868898s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-269501 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0817 22:00:10.600632  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:00:31.664596  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-269501 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.312389171s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-269501 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-08-17 22:01:20.211404756 +0000 UTC m=+3053.852172489
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-269501 -n test-preload-269501
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-269501 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-269501 logs -n 25: (1.081855158s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n multinode-959371 sudo cat                                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /home/docker/cp-test_multinode-959371-m03_multinode-959371.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt                       | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m02:/home/docker/cp-test_multinode-959371-m03_multinode-959371-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n                                                                 | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | multinode-959371-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-959371 ssh -n multinode-959371-m02 sudo cat                                   | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	|         | /home/docker/cp-test_multinode-959371-m03_multinode-959371-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-959371 node stop m03                                                          | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:35 UTC |
	| node    | multinode-959371 node start                                                             | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:35 UTC | 17 Aug 23 21:36 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-959371                                                                | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:36 UTC |                     |
	| stop    | -p multinode-959371                                                                     | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:36 UTC |                     |
	| start   | -p multinode-959371                                                                     | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:38 UTC | 17 Aug 23 21:47 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-959371                                                                | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:47 UTC |                     |
	| node    | multinode-959371 node delete                                                            | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:47 UTC | 17 Aug 23 21:47 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-959371 stop                                                                   | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:47 UTC |                     |
	| start   | -p multinode-959371                                                                     | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:49 UTC | 17 Aug 23 21:57 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-959371                                                                | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:57 UTC |                     |
	| start   | -p multinode-959371-m02                                                                 | multinode-959371-m02 | jenkins | v1.31.2 | 17 Aug 23 21:57 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-959371-m03                                                                 | multinode-959371-m03 | jenkins | v1.31.2 | 17 Aug 23 21:57 UTC | 17 Aug 23 21:58 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-959371                                                                 | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:58 UTC |                     |
	| delete  | -p multinode-959371-m03                                                                 | multinode-959371-m03 | jenkins | v1.31.2 | 17 Aug 23 21:58 UTC | 17 Aug 23 21:58 UTC |
	| delete  | -p multinode-959371                                                                     | multinode-959371     | jenkins | v1.31.2 | 17 Aug 23 21:58 UTC | 17 Aug 23 21:58 UTC |
	| start   | -p test-preload-269501                                                                  | test-preload-269501  | jenkins | v1.31.2 | 17 Aug 23 21:58 UTC | 17 Aug 23 21:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-269501 image pull                                                          | test-preload-269501  | jenkins | v1.31.2 | 17 Aug 23 21:59 UTC | 17 Aug 23 21:59 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-269501                                                                  | test-preload-269501  | jenkins | v1.31.2 | 17 Aug 23 21:59 UTC | 17 Aug 23 22:00 UTC |
	| start   | -p test-preload-269501                                                                  | test-preload-269501  | jenkins | v1.31.2 | 17 Aug 23 22:00 UTC | 17 Aug 23 22:01 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-269501 image list                                                          | test-preload-269501  | jenkins | v1.31.2 | 17 Aug 23 22:01 UTC | 17 Aug 23 22:01 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:00:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:00:00.713814  231632 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:00:00.713976  231632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:00:00.713984  231632 out.go:309] Setting ErrFile to fd 2...
	I0817 22:00:00.713989  231632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:00:00.714203  231632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:00:00.714809  231632 out.go:303] Setting JSON to false
	I0817 22:00:00.715749  231632 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24126,"bootTime":1692285475,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:00:00.715818  231632 start.go:138] virtualization: kvm guest
	I0817 22:00:00.718586  231632 out.go:177] * [test-preload-269501] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:00:00.720495  231632 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:00:00.720501  231632 notify.go:220] Checking for updates...
	I0817 22:00:00.722152  231632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:00:00.723989  231632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:00:00.725628  231632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:00:00.727161  231632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:00:00.729245  231632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:00:00.731091  231632 config.go:182] Loaded profile config "test-preload-269501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0817 22:00:00.731506  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:00:00.731567  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:00:00.746010  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I0817 22:00:00.746526  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:00:00.747174  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:00:00.747197  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:00:00.747558  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:00:00.747765  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:00.749797  231632 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0817 22:00:00.751413  231632 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:00:00.751738  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:00:00.751777  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:00:00.766651  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I0817 22:00:00.767158  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:00:00.767724  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:00:00.767758  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:00:00.768166  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:00:00.768465  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:00.807339  231632 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:00:00.808886  231632 start.go:298] selected driver: kvm2
	I0817 22:00:00.808910  231632 start.go:902] validating driver "kvm2" against &{Name:test-preload-269501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName
:test-preload-269501 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:00:00.809055  231632 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:00:00.809782  231632 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:00:00.809871  231632 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:00:00.825750  231632 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:00:00.826196  231632 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 22:00:00.826244  231632 cni.go:84] Creating CNI manager for ""
	I0817 22:00:00.826256  231632 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:00:00.826273  231632 start_flags.go:319] config:
	{Name:test-preload-269501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-269501 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:00:00.826452  231632 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:00:00.828871  231632 out.go:177] * Starting control plane node test-preload-269501 in cluster test-preload-269501
	I0817 22:00:00.830614  231632 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0817 22:00:00.851239  231632 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:00:00.851279  231632 cache.go:57] Caching tarball of preloaded images
	I0817 22:00:00.851521  231632 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0817 22:00:00.853721  231632 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0817 22:00:00.855372  231632 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0817 22:00:00.893079  231632 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:00:07.761717  231632 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0817 22:00:07.761843  231632 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0817 22:00:08.635930  231632 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0817 22:00:08.636116  231632 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/config.json ...
	I0817 22:00:08.636405  231632 start.go:365] acquiring machines lock for test-preload-269501: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:00:08.636498  231632 start.go:369] acquired machines lock for "test-preload-269501" in 64.671µs
	I0817 22:00:08.636520  231632 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:00:08.636540  231632 fix.go:54] fixHost starting: 
	I0817 22:00:08.636972  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:00:08.637027  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:00:08.652325  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0817 22:00:08.652856  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:00:08.653468  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:00:08.653497  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:00:08.653872  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:00:08.654114  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:08.654268  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetState
	I0817 22:00:08.656141  231632 fix.go:102] recreateIfNeeded on test-preload-269501: state=Stopped err=<nil>
	I0817 22:00:08.656170  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	W0817 22:00:08.656350  231632 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:00:08.658961  231632 out.go:177] * Restarting existing kvm2 VM for "test-preload-269501" ...
	I0817 22:00:08.660749  231632 main.go:141] libmachine: (test-preload-269501) Calling .Start
	I0817 22:00:08.660974  231632 main.go:141] libmachine: (test-preload-269501) Ensuring networks are active...
	I0817 22:00:08.661887  231632 main.go:141] libmachine: (test-preload-269501) Ensuring network default is active
	I0817 22:00:08.662295  231632 main.go:141] libmachine: (test-preload-269501) Ensuring network mk-test-preload-269501 is active
	I0817 22:00:08.662696  231632 main.go:141] libmachine: (test-preload-269501) Getting domain xml...
	I0817 22:00:08.663556  231632 main.go:141] libmachine: (test-preload-269501) Creating domain...
	I0817 22:00:09.891583  231632 main.go:141] libmachine: (test-preload-269501) Waiting to get IP...
	I0817 22:00:09.892476  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:09.892813  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:09.892973  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:09.892830  231677 retry.go:31] will retry after 232.954101ms: waiting for machine to come up
	I0817 22:00:10.127495  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:10.127922  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:10.127950  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:10.127854  231677 retry.go:31] will retry after 376.365606ms: waiting for machine to come up
	I0817 22:00:10.505686  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:10.506145  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:10.506179  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:10.506086  231677 retry.go:31] will retry after 453.746174ms: waiting for machine to come up
	I0817 22:00:10.962040  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:10.962641  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:10.962667  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:10.962580  231677 retry.go:31] will retry after 387.173685ms: waiting for machine to come up
	I0817 22:00:11.351204  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:11.351569  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:11.351630  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:11.351531  231677 retry.go:31] will retry after 710.494148ms: waiting for machine to come up
	I0817 22:00:12.063360  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:12.063882  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:12.063920  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:12.063762  231677 retry.go:31] will retry after 740.91749ms: waiting for machine to come up
	I0817 22:00:12.807033  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:12.807527  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:12.807559  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:12.807471  231677 retry.go:31] will retry after 1.069869923s: waiting for machine to come up
	I0817 22:00:13.879436  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:13.879884  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:13.879919  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:13.879823  231677 retry.go:31] will retry after 932.371676ms: waiting for machine to come up
	I0817 22:00:14.814138  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:14.814556  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:14.814584  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:14.814515  231677 retry.go:31] will retry after 1.610942415s: waiting for machine to come up
	I0817 22:00:16.427536  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:16.427922  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:16.427959  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:16.427809  231677 retry.go:31] will retry after 2.274332227s: waiting for machine to come up
	I0817 22:00:18.704367  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:18.704873  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:18.704900  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:18.704801  231677 retry.go:31] will retry after 1.954337207s: waiting for machine to come up
	I0817 22:00:20.662078  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:20.662598  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:20.662637  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:20.662514  231677 retry.go:31] will retry after 2.388929059s: waiting for machine to come up
	I0817 22:00:23.054113  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:23.054524  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:23.054550  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:23.054476  231677 retry.go:31] will retry after 2.733139728s: waiting for machine to come up
	I0817 22:00:25.790641  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:25.790955  231632 main.go:141] libmachine: (test-preload-269501) DBG | unable to find current IP address of domain test-preload-269501 in network mk-test-preload-269501
	I0817 22:00:25.790993  231632 main.go:141] libmachine: (test-preload-269501) DBG | I0817 22:00:25.790916  231677 retry.go:31] will retry after 3.964439217s: waiting for machine to come up
	I0817 22:00:29.757441  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.757942  231632 main.go:141] libmachine: (test-preload-269501) Found IP for machine: 192.168.39.183
	I0817 22:00:29.757969  231632 main.go:141] libmachine: (test-preload-269501) Reserving static IP address...
	I0817 22:00:29.757986  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has current primary IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.758450  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "test-preload-269501", mac: "52:54:00:1b:9a:34", ip: "192.168.39.183"} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:29.758489  231632 main.go:141] libmachine: (test-preload-269501) DBG | skip adding static IP to network mk-test-preload-269501 - found existing host DHCP lease matching {name: "test-preload-269501", mac: "52:54:00:1b:9a:34", ip: "192.168.39.183"}
	I0817 22:00:29.758498  231632 main.go:141] libmachine: (test-preload-269501) Reserved static IP address: 192.168.39.183
	I0817 22:00:29.758510  231632 main.go:141] libmachine: (test-preload-269501) DBG | Getting to WaitForSSH function...
	I0817 22:00:29.758520  231632 main.go:141] libmachine: (test-preload-269501) Waiting for SSH to be available...
	I0817 22:00:29.760328  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.760624  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:29.760652  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.760730  231632 main.go:141] libmachine: (test-preload-269501) DBG | Using SSH client type: external
	I0817 22:00:29.760771  231632 main.go:141] libmachine: (test-preload-269501) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa (-rw-------)
	I0817 22:00:29.760814  231632 main.go:141] libmachine: (test-preload-269501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:00:29.760833  231632 main.go:141] libmachine: (test-preload-269501) DBG | About to run SSH command:
	I0817 22:00:29.760847  231632 main.go:141] libmachine: (test-preload-269501) DBG | exit 0
	I0817 22:00:29.858257  231632 main.go:141] libmachine: (test-preload-269501) DBG | SSH cmd err, output: <nil>: 
	I0817 22:00:29.858628  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetConfigRaw
	I0817 22:00:29.859422  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetIP
	I0817 22:00:29.861712  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.862087  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:29.862123  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.862338  231632 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/config.json ...
	I0817 22:00:29.862595  231632 machine.go:88] provisioning docker machine ...
	I0817 22:00:29.862615  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:29.862825  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetMachineName
	I0817 22:00:29.862986  231632 buildroot.go:166] provisioning hostname "test-preload-269501"
	I0817 22:00:29.863009  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetMachineName
	I0817 22:00:29.863176  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:29.865184  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.865467  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:29.865504  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:29.865683  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:29.865842  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:29.865954  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:29.866035  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:29.866171  231632 main.go:141] libmachine: Using SSH client type: native
	I0817 22:00:29.866669  231632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0817 22:00:29.866683  231632 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-269501 && echo "test-preload-269501" | sudo tee /etc/hostname
	I0817 22:00:30.013188  231632 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-269501
	
	I0817 22:00:30.013230  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:30.015866  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.016180  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.016221  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.016354  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:30.016548  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.016806  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.016954  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:30.017166  231632 main.go:141] libmachine: Using SSH client type: native
	I0817 22:00:30.017572  231632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0817 22:00:30.017588  231632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-269501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-269501/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-269501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:00:30.159562  231632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:00:30.159609  231632 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:00:30.159632  231632 buildroot.go:174] setting up certificates
	I0817 22:00:30.159642  231632 provision.go:83] configureAuth start
	I0817 22:00:30.159652  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetMachineName
	I0817 22:00:30.159962  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetIP
	I0817 22:00:30.162455  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.162790  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.162826  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.162962  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:30.165002  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.165366  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.165400  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.165535  231632 provision.go:138] copyHostCerts
	I0817 22:00:30.165586  231632 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:00:30.165604  231632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:00:30.165668  231632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:00:30.165752  231632 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:00:30.165761  231632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:00:30.165782  231632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:00:30.165832  231632 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:00:30.165838  231632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:00:30.165858  231632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:00:30.165904  231632 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.test-preload-269501 san=[192.168.39.183 192.168.39.183 localhost 127.0.0.1 minikube test-preload-269501]
	I0817 22:00:30.306503  231632 provision.go:172] copyRemoteCerts
	I0817 22:00:30.306570  231632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:00:30.306598  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:30.309424  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.309761  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.309813  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.309992  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:30.310208  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.310365  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:30.310498  231632 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa Username:docker}
	I0817 22:00:30.407312  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:00:30.431159  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0817 22:00:30.455511  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:00:30.480759  231632 provision.go:86] duration metric: configureAuth took 321.103277ms
	I0817 22:00:30.480791  231632 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:00:30.481019  231632 config.go:182] Loaded profile config "test-preload-269501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0817 22:00:30.481125  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:30.484258  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.484603  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.484656  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.484843  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:30.485094  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.485274  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.485448  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:30.485638  231632 main.go:141] libmachine: Using SSH client type: native
	I0817 22:00:30.486121  231632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0817 22:00:30.486143  231632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:00:30.817517  231632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:00:30.817562  231632 machine.go:91] provisioned docker machine in 954.948359ms
	I0817 22:00:30.817575  231632 start.go:300] post-start starting for "test-preload-269501" (driver="kvm2")
	I0817 22:00:30.817589  231632 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:00:30.817618  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:30.817982  231632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:00:30.818017  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:30.820516  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.820896  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.820930  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.821042  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:30.821252  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.821404  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:30.821561  231632 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa Username:docker}
	I0817 22:00:30.915832  231632 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:00:30.920050  231632 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:00:30.920075  231632 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:00:30.920187  231632 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:00:30.920262  231632 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:00:30.920349  231632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:00:30.928384  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:00:30.953479  231632 start.go:303] post-start completed in 135.887645ms
	I0817 22:00:30.953507  231632 fix.go:56] fixHost completed within 22.31696789s
	I0817 22:00:30.953531  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:30.956314  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.956617  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:30.956664  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:30.956831  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:30.957058  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.957214  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:30.957358  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:30.957530  231632 main.go:141] libmachine: Using SSH client type: native
	I0817 22:00:30.957924  231632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0817 22:00:30.957936  231632 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:00:31.090979  231632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692309631.073820887
	
	I0817 22:00:31.091008  231632 fix.go:206] guest clock: 1692309631.073820887
	I0817 22:00:31.091020  231632 fix.go:219] Guest: 2023-08-17 22:00:31.073820887 +0000 UTC Remote: 2023-08-17 22:00:30.953511585 +0000 UTC m=+30.276212979 (delta=120.309302ms)
	I0817 22:00:31.091048  231632 fix.go:190] guest clock delta is within tolerance: 120.309302ms
	I0817 22:00:31.091055  231632 start.go:83] releasing machines lock for "test-preload-269501", held for 22.454544479s
	I0817 22:00:31.091084  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:31.091374  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetIP
	I0817 22:00:31.093885  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:31.094292  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:31.094327  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:31.094428  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:31.094986  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:31.095164  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:00:31.095261  231632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:00:31.095319  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:31.095396  231632 ssh_runner.go:195] Run: cat /version.json
	I0817 22:00:31.095412  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:00:31.098156  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:31.098421  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:31.098463  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:31.098503  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:31.098624  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:31.098808  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:31.098865  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:31.098894  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:31.098947  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:31.099065  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:00:31.099153  231632 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa Username:docker}
	I0817 22:00:31.099237  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:00:31.099367  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:00:31.099488  231632 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa Username:docker}
	I0817 22:00:31.216173  231632 ssh_runner.go:195] Run: systemctl --version
	I0817 22:00:31.222377  231632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:00:31.368952  231632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:00:31.374827  231632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:00:31.374922  231632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:00:31.390236  231632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:00:31.390266  231632 start.go:466] detecting cgroup driver to use...
	I0817 22:00:31.390375  231632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:00:31.407066  231632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:00:31.419092  231632 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:00:31.419152  231632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:00:31.431105  231632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:00:31.443337  231632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:00:31.547087  231632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:00:31.672084  231632 docker.go:212] disabling docker service ...
	I0817 22:00:31.672147  231632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:00:31.686444  231632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:00:31.698609  231632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:00:31.813466  231632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:00:31.928091  231632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:00:31.941467  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:00:31.959389  231632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0817 22:00:31.959466  231632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:00:31.968861  231632 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:00:31.968931  231632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:00:31.978426  231632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:00:31.988154  231632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:00:31.998141  231632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:00:32.008439  231632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:00:32.016739  231632 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:00:32.016794  231632 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:00:32.028946  231632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:00:32.038025  231632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:00:32.148881  231632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:00:32.321417  231632 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:00:32.321494  231632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:00:32.326411  231632 start.go:534] Will wait 60s for crictl version
	I0817 22:00:32.326467  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:32.333259  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:00:32.362956  231632 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:00:32.363067  231632 ssh_runner.go:195] Run: crio --version
	I0817 22:00:32.417339  231632 ssh_runner.go:195] Run: crio --version
	I0817 22:00:32.462976  231632 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0817 22:00:32.464830  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetIP
	I0817 22:00:32.467501  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:32.467850  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:00:32.467885  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:00:32.468094  231632 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 22:00:32.472324  231632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:00:32.485291  231632 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0817 22:00:32.485364  231632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:00:32.517989  231632 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0817 22:00:32.518083  231632 ssh_runner.go:195] Run: which lz4
	I0817 22:00:32.521966  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:00:32.526073  231632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:00:32.526107  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0817 22:00:34.372808  231632 crio.go:444] Took 1.850854 seconds to copy over tarball
	I0817 22:00:34.372894  231632 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:00:37.475336  231632 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102415889s)
	I0817 22:00:37.475364  231632 crio.go:451] Took 3.102525 seconds to extract the tarball
	I0817 22:00:37.475374  231632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:00:37.516824  231632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:00:37.561020  231632 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0817 22:00:37.561047  231632 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:00:37.561115  231632 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:00:37.561142  231632 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0817 22:00:37.561156  231632 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0817 22:00:37.561185  231632 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0817 22:00:37.561215  231632 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0817 22:00:37.561269  231632 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0817 22:00:37.561281  231632 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0817 22:00:37.561191  231632 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0817 22:00:37.562861  231632 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0817 22:00:37.562877  231632 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0817 22:00:37.562876  231632 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0817 22:00:37.562887  231632 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:00:37.562864  231632 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0817 22:00:37.562989  231632 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0817 22:00:37.563071  231632 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0817 22:00:37.563144  231632 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0817 22:00:37.741194  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0817 22:00:37.743032  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0817 22:00:37.747756  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0817 22:00:37.748846  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0817 22:00:37.749848  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0817 22:00:37.755223  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0817 22:00:37.756760  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0817 22:00:37.849494  231632 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:00:37.901952  231632 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0817 22:00:37.901979  231632 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0817 22:00:37.902007  231632 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0817 22:00:37.902008  231632 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0817 22:00:37.902067  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:37.902067  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:37.931275  231632 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0817 22:00:37.931330  231632 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0817 22:00:37.931372  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:37.951886  231632 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0817 22:00:37.951930  231632 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0817 22:00:37.951954  231632 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0817 22:00:37.951987  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:37.951999  231632 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0817 22:00:37.952030  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:37.967953  231632 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0817 22:00:37.968001  231632 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0817 22:00:37.968052  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:37.968159  231632 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0817 22:00:37.968184  231632 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0817 22:00:37.968204  231632 ssh_runner.go:195] Run: which crictl
	I0817 22:00:38.091857  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0817 22:00:38.091895  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0817 22:00:38.091947  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0817 22:00:38.092017  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0817 22:00:38.092035  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0817 22:00:38.092074  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0817 22:00:38.092126  231632 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0817 22:00:38.178610  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0817 22:00:38.178744  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0817 22:00:38.218042  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0817 22:00:38.218146  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0817 22:00:38.218197  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0817 22:00:38.218220  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0817 22:00:38.218241  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0817 22:00:38.218278  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0817 22:00:38.218380  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0817 22:00:38.218408  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0817 22:00:38.218460  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0817 22:00:38.218467  231632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0817 22:00:38.218495  231632 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0817 22:00:38.218526  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0817 22:00:38.218540  231632 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0817 22:00:38.218566  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0817 22:00:38.218567  231632 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0817 22:00:38.236245  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0817 22:00:38.236355  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0817 22:00:38.236365  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0817 22:00:38.236428  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0817 22:00:38.236470  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0817 22:00:38.236624  231632 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0817 22:00:40.183943  231632 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.965348682s)
	I0817 22:00:40.183980  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0817 22:00:40.184013  231632 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0817 22:00:40.184095  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0817 22:00:40.624505  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0817 22:00:40.624569  231632 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0817 22:00:40.624639  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0817 22:00:41.366480  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0817 22:00:41.366540  231632 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0817 22:00:41.366623  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0817 22:00:41.807207  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0817 22:00:41.807266  231632 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0817 22:00:41.807334  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0817 22:00:41.957128  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0817 22:00:41.957177  231632 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0817 22:00:41.957223  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0817 22:00:42.805010  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0817 22:00:42.805067  231632 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0817 22:00:42.805128  231632 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0817 22:00:45.071504  231632 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.266346301s)
	I0817 22:00:45.071540  231632 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0817 22:00:45.071568  231632 cache_images.go:123] Successfully loaded all cached images
	I0817 22:00:45.071573  231632 cache_images.go:92] LoadImages completed in 7.51051354s
	I0817 22:00:45.071633  231632 ssh_runner.go:195] Run: crio config
	I0817 22:00:45.131775  231632 cni.go:84] Creating CNI manager for ""
	I0817 22:00:45.131803  231632 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:00:45.131832  231632 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:00:45.131864  231632 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-269501 NodeName:test-preload-269501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:00:45.132033  231632 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-269501"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:00:45.132129  231632 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-269501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-269501 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:00:45.132204  231632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0817 22:00:45.142077  231632 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:00:45.142150  231632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:00:45.151322  231632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0817 22:00:45.167576  231632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:00:45.183997  231632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0817 22:00:45.200745  231632 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0817 22:00:45.204796  231632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:00:45.217990  231632 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501 for IP: 192.168.39.183
	I0817 22:00:45.218025  231632 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:00:45.218225  231632 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:00:45.218285  231632 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:00:45.218411  231632 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.key
	I0817 22:00:45.218480  231632 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/apiserver.key.a2b84326
	I0817 22:00:45.218515  231632 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/proxy-client.key
	I0817 22:00:45.218626  231632 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:00:45.218655  231632 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:00:45.218665  231632 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:00:45.218689  231632 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:00:45.218712  231632 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:00:45.218734  231632 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:00:45.218773  231632 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:00:45.219519  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:00:45.243721  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:00:45.266585  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:00:45.290186  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:00:45.314551  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:00:45.338178  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:00:45.361230  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:00:45.385809  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:00:45.409578  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:00:45.431908  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:00:45.454979  231632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:00:45.478290  231632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:00:45.494785  231632 ssh_runner.go:195] Run: openssl version
	I0817 22:00:45.500243  231632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:00:45.511235  231632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:00:45.516015  231632 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:00:45.516085  231632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:00:45.521725  231632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:00:45.532245  231632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:00:45.543041  231632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:00:45.548010  231632 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:00:45.548100  231632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:00:45.553647  231632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:00:45.564677  231632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:00:45.575604  231632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:00:45.580718  231632 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:00:45.580778  231632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:00:45.586598  231632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:00:45.597772  231632 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:00:45.602869  231632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:00:45.609092  231632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:00:45.615203  231632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:00:45.621293  231632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:00:45.627228  231632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:00:45.633033  231632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:00:45.638907  231632 kubeadm.go:404] StartCluster: {Name:test-preload-269501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-269501 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:00:45.639054  231632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:00:45.639109  231632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:00:45.669263  231632 cri.go:89] found id: ""
	I0817 22:00:45.669339  231632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:00:45.680351  231632 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:00:45.680379  231632 kubeadm.go:636] restartCluster start
	I0817 22:00:45.680427  231632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:00:45.690582  231632 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:45.691071  231632 kubeconfig.go:135] verify returned: extract IP: "test-preload-269501" does not appear in /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:00:45.691204  231632 kubeconfig.go:146] "test-preload-269501" context is missing from /home/jenkins/minikube-integration/16865-203458/kubeconfig - will repair!
	I0817 22:00:45.691557  231632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:00:45.692201  231632 kapi.go:59] client config for test-preload-269501: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 22:00:45.693404  231632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:00:45.703671  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:45.703788  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:45.716081  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:45.717525  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:45.717605  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:45.729433  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:46.230034  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:46.230202  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:46.242813  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:46.730427  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:46.730518  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:46.743461  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:47.230012  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:47.230121  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:47.242665  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:47.729830  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:47.729915  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:47.742505  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:48.230007  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:48.230120  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:48.242068  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:48.730114  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:48.730211  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:48.742823  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:49.230427  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:49.230565  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:49.243096  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:49.729668  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:49.729789  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:49.744174  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:50.229747  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:50.229835  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:50.243352  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:50.730510  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:50.730595  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:50.745124  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:51.229763  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:51.229878  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:51.242155  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:51.729699  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:51.729788  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:51.743385  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:52.229945  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:52.230080  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:52.244080  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:52.729541  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:52.729659  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:52.740384  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:53.229997  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:53.230097  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:53.244159  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:53.730325  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:53.730422  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:53.741038  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:54.230586  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:54.230678  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:54.241721  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:54.730361  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:54.730479  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:54.741619  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:55.230280  231632 api_server.go:166] Checking apiserver status ...
	I0817 22:00:55.230393  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:00:55.241553  231632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:00:55.704386  231632 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:00:55.704468  231632 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:00:55.704485  231632 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:00:55.704561  231632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:00:55.737788  231632 cri.go:89] found id: ""
	I0817 22:00:55.737867  231632 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:00:55.753579  231632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:00:55.762831  231632 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:00:55.762901  231632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:00:55.771661  231632 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:00:55.771684  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:00:55.899723  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:00:56.832800  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:00:57.243364  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:00:57.351295  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:00:57.467983  231632 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:00:57.468075  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:00:57.487602  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:00:58.000591  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:00:58.500352  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:00:59.000891  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:00:59.501057  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:00:59.527280  231632 api_server.go:72] duration metric: took 2.059298798s to wait for apiserver process to appear ...
	I0817 22:00:59.527307  231632 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:00:59.527324  231632 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0817 22:01:04.489023  231632 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:01:04.489059  231632 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:01:04.489071  231632 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0817 22:01:04.546067  231632 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:01:04.546103  231632 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:01:05.046875  231632 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0817 22:01:05.054364  231632 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0817 22:01:05.054404  231632 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0817 22:01:05.547029  231632 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0817 22:01:05.554906  231632 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0817 22:01:05.554966  231632 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0817 22:01:06.047247  231632 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0817 22:01:06.054510  231632 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0817 22:01:06.065942  231632 api_server.go:141] control plane version: v1.24.4
	I0817 22:01:06.065983  231632 api_server.go:131] duration metric: took 6.53866816s to wait for apiserver health ...
	I0817 22:01:06.065995  231632 cni.go:84] Creating CNI manager for ""
	I0817 22:01:06.066003  231632 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:01:06.067924  231632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:01:06.069523  231632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:01:06.082321  231632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:01:06.104108  231632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:01:06.120285  231632 system_pods.go:59] 8 kube-system pods found
	I0817 22:01:06.120338  231632 system_pods.go:61] "coredns-6d4b75cb6d-24crz" [2e246c82-f058-48f6-b1c0-20429fa15324] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:01:06.120360  231632 system_pods.go:61] "coredns-6d4b75cb6d-vdk7w" [4579ff2c-8863-4fe0-89e9-782e93ef090a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:01:06.120370  231632 system_pods.go:61] "etcd-test-preload-269501" [791f193b-afae-4172-99b7-01f5a5910f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:01:06.120385  231632 system_pods.go:61] "kube-apiserver-test-preload-269501" [12a556ae-800f-452a-b086-c1ef3b19758d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:01:06.120392  231632 system_pods.go:61] "kube-controller-manager-test-preload-269501" [e4c107fd-2c84-4730-b2dd-11ad10cde37e] Running
	I0817 22:01:06.120402  231632 system_pods.go:61] "kube-proxy-pj5r8" [2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:01:06.120415  231632 system_pods.go:61] "kube-scheduler-test-preload-269501" [e9fbd91d-7d6a-4f17-b35d-ffa9b4d13ed7] Running
	I0817 22:01:06.120423  231632 system_pods.go:61] "storage-provisioner" [62bf3a79-2a2d-4773-9110-8c7d9559295e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:01:06.120440  231632 system_pods.go:74] duration metric: took 16.298952ms to wait for pod list to return data ...
	I0817 22:01:06.120450  231632 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:01:06.124043  231632 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:01:06.124076  231632 node_conditions.go:123] node cpu capacity is 2
	I0817 22:01:06.124090  231632 node_conditions.go:105] duration metric: took 3.630451ms to run NodePressure ...
	I0817 22:01:06.124112  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:01:06.341793  231632 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:01:06.346339  231632 kubeadm.go:787] kubelet initialised
	I0817 22:01:06.346362  231632 kubeadm.go:788] duration metric: took 4.542081ms waiting for restarted kubelet to initialise ...
	I0817 22:01:06.346370  231632 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:01:06.351581  231632 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-24crz" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:06.356917  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "coredns-6d4b75cb6d-24crz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.356942  231632 pod_ready.go:81] duration metric: took 5.33682ms waiting for pod "coredns-6d4b75cb6d-24crz" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:06.356953  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "coredns-6d4b75cb6d-24crz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.356964  231632 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:06.361886  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.361914  231632 pod_ready.go:81] duration metric: took 4.940838ms waiting for pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:06.361925  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.361937  231632 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:06.384145  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "etcd-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.384191  231632 pod_ready.go:81] duration metric: took 22.244138ms waiting for pod "etcd-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:06.384206  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "etcd-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.384223  231632 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:06.513490  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "kube-apiserver-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.513534  231632 pod_ready.go:81] duration metric: took 129.300379ms waiting for pod "kube-apiserver-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:06.513547  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "kube-apiserver-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.513564  231632 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:06.909186  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.909219  231632 pod_ready.go:81] duration metric: took 395.642526ms waiting for pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:06.909227  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:06.909236  231632 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pj5r8" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:07.308289  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "kube-proxy-pj5r8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:07.308323  231632 pod_ready.go:81] duration metric: took 399.07835ms waiting for pod "kube-proxy-pj5r8" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:07.308333  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "kube-proxy-pj5r8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:07.308339  231632 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:07.709142  231632 pod_ready.go:97] node "test-preload-269501" hosting pod "kube-scheduler-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:07.709169  231632 pod_ready.go:81] duration metric: took 400.824358ms waiting for pod "kube-scheduler-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	E0817 22:01:07.709179  231632 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-269501" hosting pod "kube-scheduler-test-preload-269501" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:07.709188  231632 pod_ready.go:38] duration metric: took 1.362807872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:01:07.709206  231632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:01:07.721616  231632 ops.go:34] apiserver oom_adj: -16
	I0817 22:01:07.721647  231632 kubeadm.go:640] restartCluster took 22.041260283s
	I0817 22:01:07.721659  231632 kubeadm.go:406] StartCluster complete in 22.082760881s
	I0817 22:01:07.721679  231632 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:01:07.721779  231632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:01:07.722914  231632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:01:07.723235  231632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:01:07.723421  231632 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:01:07.723501  231632 addons.go:69] Setting storage-provisioner=true in profile "test-preload-269501"
	I0817 22:01:07.723514  231632 addons.go:231] Setting addon storage-provisioner=true in "test-preload-269501"
	W0817 22:01:07.723521  231632 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:01:07.723471  231632 config.go:182] Loaded profile config "test-preload-269501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0817 22:01:07.723595  231632 host.go:66] Checking if "test-preload-269501" exists ...
	I0817 22:01:07.723599  231632 addons.go:69] Setting default-storageclass=true in profile "test-preload-269501"
	I0817 22:01:07.723621  231632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-269501"
	I0817 22:01:07.723940  231632 kapi.go:59] client config for test-preload-269501: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 22:01:07.724039  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:01:07.724085  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:01:07.724127  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:01:07.724254  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:01:07.728189  231632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-269501" context rescaled to 1 replicas
	I0817 22:01:07.728231  231632 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:01:07.730596  231632 out.go:177] * Verifying Kubernetes components...
	I0817 22:01:07.732288  231632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:01:07.739901  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0817 22:01:07.740383  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:01:07.740914  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:01:07.740935  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:01:07.741379  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:01:07.741632  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetState
	I0817 22:01:07.744157  231632 kapi.go:59] client config for test-preload-269501: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/test-preload-269501/client.key", CAFile:"/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 22:01:07.744371  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0817 22:01:07.744821  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:01:07.745405  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:01:07.745432  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:01:07.745778  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:01:07.746298  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:01:07.746361  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:01:07.752513  231632 addons.go:231] Setting addon default-storageclass=true in "test-preload-269501"
	W0817 22:01:07.752540  231632 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:01:07.752574  231632 host.go:66] Checking if "test-preload-269501" exists ...
	I0817 22:01:07.752927  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:01:07.752975  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:01:07.762392  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I0817 22:01:07.762959  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:01:07.763573  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:01:07.763607  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:01:07.764043  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:01:07.764349  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetState
	I0817 22:01:07.766251  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:01:07.769137  231632 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:01:07.769023  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
	I0817 22:01:07.771163  231632 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:01:07.771188  231632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:01:07.771218  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:01:07.771529  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:01:07.772030  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:01:07.772053  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:01:07.772579  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:01:07.773313  231632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:01:07.773366  231632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:01:07.774746  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:01:07.775101  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:01:07.775126  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:01:07.775253  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:01:07.775473  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:01:07.775618  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:01:07.775791  231632 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa Username:docker}
	I0817 22:01:07.789312  231632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0817 22:01:07.789880  231632 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:01:07.790518  231632 main.go:141] libmachine: Using API Version  1
	I0817 22:01:07.790543  231632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:01:07.790947  231632 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:01:07.791166  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetState
	I0817 22:01:07.793142  231632 main.go:141] libmachine: (test-preload-269501) Calling .DriverName
	I0817 22:01:07.793463  231632 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:01:07.793482  231632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:01:07.793508  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHHostname
	I0817 22:01:07.797034  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:01:07.797529  231632 main.go:141] libmachine: (test-preload-269501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:9a:34", ip: ""} in network mk-test-preload-269501: {Iface:virbr1 ExpiryTime:2023-08-17 23:00:21 +0000 UTC Type:0 Mac:52:54:00:1b:9a:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:test-preload-269501 Clientid:01:52:54:00:1b:9a:34}
	I0817 22:01:07.797568  231632 main.go:141] libmachine: (test-preload-269501) DBG | domain test-preload-269501 has defined IP address 192.168.39.183 and MAC address 52:54:00:1b:9a:34 in network mk-test-preload-269501
	I0817 22:01:07.797828  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHPort
	I0817 22:01:07.798039  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHKeyPath
	I0817 22:01:07.798510  231632 main.go:141] libmachine: (test-preload-269501) Calling .GetSSHUsername
	I0817 22:01:07.798678  231632 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/test-preload-269501/id_rsa Username:docker}
	I0817 22:01:07.933651  231632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:01:07.947649  231632 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:01:07.947715  231632 node_ready.go:35] waiting up to 6m0s for node "test-preload-269501" to be "Ready" ...
	I0817 22:01:07.960354  231632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:01:08.905814  231632 main.go:141] libmachine: Making call to close driver server
	I0817 22:01:08.905849  231632 main.go:141] libmachine: (test-preload-269501) Calling .Close
	I0817 22:01:08.906244  231632 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:01:08.906295  231632 main.go:141] libmachine: (test-preload-269501) DBG | Closing plugin on server side
	I0817 22:01:08.906317  231632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:01:08.906333  231632 main.go:141] libmachine: Making call to close driver server
	I0817 22:01:08.906347  231632 main.go:141] libmachine: (test-preload-269501) Calling .Close
	I0817 22:01:08.906613  231632 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:01:08.906635  231632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:01:08.921029  231632 main.go:141] libmachine: Making call to close driver server
	I0817 22:01:08.921062  231632 main.go:141] libmachine: (test-preload-269501) Calling .Close
	I0817 22:01:08.921430  231632 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:01:08.921455  231632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:01:08.921452  231632 main.go:141] libmachine: (test-preload-269501) DBG | Closing plugin on server side
	I0817 22:01:08.921482  231632 main.go:141] libmachine: Making call to close driver server
	I0817 22:01:08.921495  231632 main.go:141] libmachine: (test-preload-269501) Calling .Close
	I0817 22:01:08.921786  231632 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:01:08.921800  231632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:01:08.921814  231632 main.go:141] libmachine: Making call to close driver server
	I0817 22:01:08.921826  231632 main.go:141] libmachine: (test-preload-269501) Calling .Close
	I0817 22:01:08.922160  231632 main.go:141] libmachine: (test-preload-269501) DBG | Closing plugin on server side
	I0817 22:01:08.922179  231632 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:01:08.922192  231632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:01:08.925733  231632 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 22:01:08.927262  231632 addons.go:502] enable addons completed in 1.203850185s: enabled=[storage-provisioner default-storageclass]
	I0817 22:01:10.112899  231632 node_ready.go:58] node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:12.613029  231632 node_ready.go:58] node "test-preload-269501" has status "Ready":"False"
	I0817 22:01:15.114380  231632 node_ready.go:49] node "test-preload-269501" has status "Ready":"True"
	I0817 22:01:15.114408  231632 node_ready.go:38] duration metric: took 7.16666054s waiting for node "test-preload-269501" to be "Ready" ...
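The node_ready.go entries above poll the node object until its Ready condition turns True (here after roughly 7s of the 6m budget). A hedged sketch of that style of wait, reusing the clientset from the earlier example; the 2s poll interval is an assumption, not taken from the log:

    // imports assumed in addition to the earlier sketch: "time", corev1 "k8s.io/api/core/v1"
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    // NodeReady flips to True once the kubelet reports the node healthy.
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }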
	I0817 22:01:15.114417  231632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:01:15.120459  231632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:15.125790  231632 pod_ready.go:92] pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace has status "Ready":"True"
	I0817 22:01:15.125815  231632 pod_ready.go:81] duration metric: took 5.326473ms waiting for pod "coredns-6d4b75cb6d-vdk7w" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:15.125828  231632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:17.154111  231632 pod_ready.go:102] pod "etcd-test-preload-269501" in "kube-system" namespace has status "Ready":"False"
	I0817 22:01:18.644273  231632 pod_ready.go:92] pod "etcd-test-preload-269501" in "kube-system" namespace has status "Ready":"True"
	I0817 22:01:18.644301  231632 pod_ready.go:81] duration metric: took 3.518466823s waiting for pod "etcd-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.644311  231632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.653816  231632 pod_ready.go:92] pod "kube-apiserver-test-preload-269501" in "kube-system" namespace has status "Ready":"True"
	I0817 22:01:18.653840  231632 pod_ready.go:81] duration metric: took 9.523936ms waiting for pod "kube-apiserver-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.653851  231632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.662472  231632 pod_ready.go:92] pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace has status "Ready":"True"
	I0817 22:01:18.662502  231632 pod_ready.go:81] duration metric: took 8.642666ms waiting for pod "kube-controller-manager-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.662519  231632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pj5r8" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.713054  231632 pod_ready.go:92] pod "kube-proxy-pj5r8" in "kube-system" namespace has status "Ready":"True"
	I0817 22:01:18.713079  231632 pod_ready.go:81] duration metric: took 50.552319ms waiting for pod "kube-proxy-pj5r8" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:18.713089  231632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:19.114254  231632 pod_ready.go:92] pod "kube-scheduler-test-preload-269501" in "kube-system" namespace has status "Ready":"True"
	I0817 22:01:19.114283  231632 pod_ready.go:81] duration metric: took 401.187785ms waiting for pod "kube-scheduler-test-preload-269501" in "kube-system" namespace to be "Ready" ...
	I0817 22:01:19.114297  231632 pod_ready.go:38] duration metric: took 3.999866075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
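The pod_ready.go block above applies the same Ready-condition check once per label selector (k8s-app=kube-dns, component=etcd, and so on). A sketch of that per-selector check, again reusing the clientset and imports from the first example; the hard-coded kube-system namespace matches the log:

    func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil // at least one matching pod is not Ready yet
            }
        }
        return true, nil
    }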
	I0817 22:01:19.114322  231632 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:01:19.114396  231632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:01:19.128355  231632 api_server.go:72] duration metric: took 11.400063147s to wait for apiserver process to appear ...
	I0817 22:01:19.128384  231632 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:01:19.128405  231632 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0817 22:01:19.134821  231632 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0817 22:01:19.135928  231632 api_server.go:141] control plane version: v1.24.4
	I0817 22:01:19.135951  231632 api_server.go:131] duration metric: took 7.560094ms to wait for apiserver health ...
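The api_server.go entries above probe https://192.168.39.183:8443/healthz and expect a 200 response with body "ok". A standalone sketch of that probe trusting the minikube CA; depending on the cluster's anonymous-auth settings, the client certificate from the first example may also be required, which the log does not show:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // CA path as reported in the client config log line earlier in this run.
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }}
        resp, err := client.Get("https://192.168.39.183:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }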
	I0817 22:01:19.135962  231632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:01:19.315838  231632 system_pods.go:59] 7 kube-system pods found
	I0817 22:01:19.315874  231632 system_pods.go:61] "coredns-6d4b75cb6d-vdk7w" [4579ff2c-8863-4fe0-89e9-782e93ef090a] Running
	I0817 22:01:19.315878  231632 system_pods.go:61] "etcd-test-preload-269501" [791f193b-afae-4172-99b7-01f5a5910f6f] Running
	I0817 22:01:19.315882  231632 system_pods.go:61] "kube-apiserver-test-preload-269501" [12a556ae-800f-452a-b086-c1ef3b19758d] Running
	I0817 22:01:19.315886  231632 system_pods.go:61] "kube-controller-manager-test-preload-269501" [e4c107fd-2c84-4730-b2dd-11ad10cde37e] Running
	I0817 22:01:19.315890  231632 system_pods.go:61] "kube-proxy-pj5r8" [2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4] Running
	I0817 22:01:19.315894  231632 system_pods.go:61] "kube-scheduler-test-preload-269501" [e9fbd91d-7d6a-4f17-b35d-ffa9b4d13ed7] Running
	I0817 22:01:19.315898  231632 system_pods.go:61] "storage-provisioner" [62bf3a79-2a2d-4773-9110-8c7d9559295e] Running
	I0817 22:01:19.315903  231632 system_pods.go:74] duration metric: took 179.93576ms to wait for pod list to return data ...
	I0817 22:01:19.315912  231632 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:01:19.512647  231632 default_sa.go:45] found service account: "default"
	I0817 22:01:19.512684  231632 default_sa.go:55] duration metric: took 196.76586ms for default service account to be created ...
	I0817 22:01:19.512696  231632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:01:19.721704  231632 system_pods.go:86] 7 kube-system pods found
	I0817 22:01:19.721746  231632 system_pods.go:89] "coredns-6d4b75cb6d-vdk7w" [4579ff2c-8863-4fe0-89e9-782e93ef090a] Running
	I0817 22:01:19.721755  231632 system_pods.go:89] "etcd-test-preload-269501" [791f193b-afae-4172-99b7-01f5a5910f6f] Running
	I0817 22:01:19.721761  231632 system_pods.go:89] "kube-apiserver-test-preload-269501" [12a556ae-800f-452a-b086-c1ef3b19758d] Running
	I0817 22:01:19.721767  231632 system_pods.go:89] "kube-controller-manager-test-preload-269501" [e4c107fd-2c84-4730-b2dd-11ad10cde37e] Running
	I0817 22:01:19.721773  231632 system_pods.go:89] "kube-proxy-pj5r8" [2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4] Running
	I0817 22:01:19.721779  231632 system_pods.go:89] "kube-scheduler-test-preload-269501" [e9fbd91d-7d6a-4f17-b35d-ffa9b4d13ed7] Running
	I0817 22:01:19.721784  231632 system_pods.go:89] "storage-provisioner" [62bf3a79-2a2d-4773-9110-8c7d9559295e] Running
	I0817 22:01:19.721792  231632 system_pods.go:126] duration metric: took 209.090178ms to wait for k8s-apps to be running ...
	I0817 22:01:19.721802  231632 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:01:19.721858  231632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:01:19.736589  231632 system_svc.go:56] duration metric: took 14.775088ms WaitForService to wait for kubelet.
	I0817 22:01:19.736637  231632 kubeadm.go:581] duration metric: took 12.008346522s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:01:19.736664  231632 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:01:19.914988  231632 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:01:19.915025  231632 node_conditions.go:123] node cpu capacity is 2
	I0817 22:01:19.915038  231632 node_conditions.go:105] duration metric: took 178.367822ms to run NodePressure ...
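The node_conditions.go entries above read the figures they log (ephemeral storage 17784752Ki, 2 CPUs) from the node's status. A small sketch reusing the earlier clientset; the log does not show whether Capacity or Allocatable is consulted, so Capacity is assumed here:

    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
        return nil
    }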
	I0817 22:01:19.915052  231632 start.go:228] waiting for startup goroutines ...
	I0817 22:01:19.915058  231632 start.go:233] waiting for cluster config update ...
	I0817 22:01:19.915072  231632 start.go:242] writing updated cluster config ...
	I0817 22:01:19.915466  231632 ssh_runner.go:195] Run: rm -f paused
	I0817 22:01:19.965949  231632 start.go:600] kubectl: 1.28.0, cluster: 1.24.4 (minor skew: 4)
	I0817 22:01:19.968150  231632 out.go:177] 
	W0817 22:01:19.969720  231632 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0817 22:01:19.971208  231632 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0817 22:01:19.973197  231632 out.go:177] * Done! kubectl is now configured to use "test-preload-269501" cluster and "default" namespace by default
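The start.go warning above compares the host kubectl (1.28.0) with the cluster version (1.24.4) and reports a minor skew of 4. A hedged sketch of that arithmetic; minikube's exact warning threshold is not visible in this log, and kubectl's documented support window is one minor version in either direction:

    // imports assumed: "strconv", "strings"
    func minorSkew(clientVer, serverVer string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(clientVer)
        if err != nil {
            return 0, err
        }
        s, err := minor(serverVer)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil // e.g. minorSkew("1.28.0", "1.24.4") == 4
        }
        return s - c, nil
    }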
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:00:20 UTC, ends at Thu 2023-08-17 22:01:21 UTC. --
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.617326799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=71c472bc-da17-4a90-bba5-571f16f5314c name=/runtime.v1.RuntimeService/ListContainers
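The crio debug entries in this journal are the server side of /runtime.v1.RuntimeService/ListContainers calls, which the kubelet and minikube poll continuously. A sketch of issuing the same CRI call as a client against CRI-O's default socket; the socket path is CRI-O's conventional location and is not taken from this log:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default CRI socket; adjust if the runtime is configured differently.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Same fields that appear in the Response dumps above.
            fmt.Println(c.Id, c.Metadata.Name, c.State)
        }
    }

The same information is available on the host from crictl ps, which speaks this API over the same socket.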
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.720905173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=75c23904-94f6-49d2-940d-089f0918ff52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.720997884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=75c23904-94f6-49d2-940d-089f0918ff52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.721176022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=75c23904-94f6-49d2-940d-089f0918ff52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.758547723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62610687-98b1-4759-8501-9b21c9a43af9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.758641235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62610687-98b1-4759-8501-9b21c9a43af9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.758832156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62610687-98b1-4759-8501-9b21c9a43af9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.793595762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f5ffe60c-c091-49cf-89ee-6159c2cfffac name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.793734793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f5ffe60c-c091-49cf-89ee-6159c2cfffac name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.793920914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f5ffe60c-c091-49cf-89ee-6159c2cfffac name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.833048464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad85103a-68f2-47ec-b4ff-61316165b424 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.833115133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad85103a-68f2-47ec-b4ff-61316165b424 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.833296186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad85103a-68f2-47ec-b4ff-61316165b424 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.869592580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=860892fc-de3b-420c-8906-fe81651fbd6e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.869682159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=860892fc-de3b-420c-8906-fe81651fbd6e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.869858503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=860892fc-de3b-420c-8906-fe81651fbd6e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.905941908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6d22b992-4053-4960-a0d8-3a0647e201ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.906004959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6d22b992-4053-4960-a0d8-3a0647e201ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.909726758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6d22b992-4053-4960-a0d8-3a0647e201ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.949214481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d166b93e-5b76-4067-9d69-76a4667f10e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.949277177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d166b93e-5b76-4067-9d69-76a4667f10e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.949590162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d166b93e-5b76-4067-9d69-76a4667f10e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.983188950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=edfb380e-eb03-4f76-898c-5cf7b7fd9427 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.983258672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=edfb380e-eb03-4f76-898c-5cf7b7fd9427 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:01:20 test-preload-269501 crio[711]: time="2023-08-17 22:01:20.983519145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192,PodSandboxId:32b29526fccbfebd1290099229906cd92cf1165ca279739c235e66153001b264,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1692309670075317940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vdk7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4579ff2c-8863-4fe0-89e9-782e93ef090a,},Annotations:map[string]string{io.kubernetes.container.hash: d7f581f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99,PodSandboxId:fad8e0057a80737341ec23339ca593baf4491b3f8217d9d69230473cbbce431c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1692309667164244988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj5r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4,},Annotations:map[string]string{io.kubernetes.container.hash: b8749961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d,PodSandboxId:5394a1723568996aa61e10e7af7610d04985439fd3a514b2eced2662f36c14ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692309666879009389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
2bf3a79-2a2d-4773-9110-8c7d9559295e,},Annotations:map[string]string{io.kubernetes.container.hash: e4803de4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8,PodSandboxId:44334ae02f2f112e72f39e3688e71f149a078b4cc1a04be0d1ba6bec5cecec08,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1692309659281891931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c74aa38e8dd23a7ad12e58377c0a93,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b61fe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae,PodSandboxId:67142a2c744fb1c912191826b598aed7fa06c082775b0371d46370429111f7cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1692309659029670428,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51370bb48177906866411b0bf5b36d98,},Annotations:map[string]string{
io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290,PodSandboxId:c1697f1ab1b8c996a0a3fd93b3d742e998cff892c26136d29455f8b8fee5c631,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1692309658749747720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df1918011df299911748c3948c38a3,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184,PodSandboxId:c2ea0de7abf3871d5dc942592e824de1e963257e81def09c642bacbd16d56155,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1692309658444177040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-269501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a738f6a0129490f40ace8425e92eb6ba,},Annotations:map[string
]string{io.kubernetes.container.hash: c30e65bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=edfb380e-eb03-4f76-898c-5cf7b7fd9427 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	ab68afac713c4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   10 seconds ago      Running             coredns                   1                   32b29526fccbf
	ac80d55a7be66       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   fad8e0057a807
	a9607a46ca0ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   5394a17235689
	09c4d6da4870f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   44334ae02f2f1
	17bbee20d0346       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   67142a2c744fb
	5e580423ab32c       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   c1697f1ab1b8c
	abc7d517f8481       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   c2ea0de7abf38
	
	* 
	* ==> coredns [ab68afac713c4ad49a592afb5302dbda75f67019c4c5b42d220dccf3c4683192] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37673 - 65348 "HINFO IN 2301057823059225305.1698918840150236217. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012202171s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-269501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-269501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=test-preload-269501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_59_33_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-269501
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:01:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:01:14 +0000   Thu, 17 Aug 2023 21:59:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:01:14 +0000   Thu, 17 Aug 2023 21:59:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:01:14 +0000   Thu, 17 Aug 2023 21:59:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:01:14 +0000   Thu, 17 Aug 2023 22:01:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    test-preload-269501
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f0b064165894b1c8018e216a6923a71
	  System UUID:                2f0b0641-6589-4b1c-8018-e216a6923a71
	  Boot ID:                    70c99a10-b17c-471a-9bc1-660912aa24d1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vdk7w                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-test-preload-269501                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         108s
	  kube-system                 kube-apiserver-test-preload-269501             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-269501    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-pj5r8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-269501             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x5 over 118s)  kubelet          Node test-preload-269501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s (x4 over 118s)  kubelet          Node test-preload-269501 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s (x5 over 118s)  kubelet          Node test-preload-269501 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node test-preload-269501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node test-preload-269501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node test-preload-269501 status is now: NodeHasSufficientPID
	  Normal  NodeReady                97s                  kubelet          Node test-preload-269501 status is now: NodeReady
	  Normal  RegisteredNode           97s                  node-controller  Node test-preload-269501 event: Registered Node test-preload-269501 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-269501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-269501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-269501 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-269501 event: Registered Node test-preload-269501 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074899] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.393992] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.574962] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153532] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.447922] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000031] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000024] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.608497] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.113573] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.154476] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.112543] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.220854] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +25.080020] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[Aug17 22:01] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.811433] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [09c4d6da4870f147d0eeaef4d2903c1fc41b904a6446b47d26c8662a53e61ef8] <==
	* {"level":"info","ts":"2023-08-17T22:01:00.727Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f87838631c8138de","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-08-17T22:01:00.729Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-17T22:01:00.730Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f87838631c8138de","initial-advertise-peer-urls":["https://192.168.39.183:2380"],"listen-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T22:01:00.729Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-08-17T22:01:00.730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de switched to configuration voters=(17904122316942555358)"}
	{"level":"info","ts":"2023-08-17T22:01:00.730Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2023-08-17T22:01:00.730Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2023-08-17T22:01:00.730Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T22:01:00.731Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","added-peer-id":"f87838631c8138de","added-peer-peer-urls":["https://192.168.39.183:2380"]}
	{"level":"info","ts":"2023-08-17T22:01:00.731Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:01:00.731Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgPreVoteResp from f87838631c8138de at term 2"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became candidate at term 3"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgVoteResp from f87838631c8138de at term 3"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became leader at term 3"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f87838631c8138de elected leader f87838631c8138de at term 3"}
	{"level":"info","ts":"2023-08-17T22:01:01.908Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f87838631c8138de","local-member-attributes":"{Name:test-preload-269501 ClientURLs:[https://192.168.39.183:2379]}","request-path":"/0/members/f87838631c8138de/attributes","cluster-id":"2dc4003dc2fbf749","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T22:01:01.909Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:01:01.910Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:01:01.911Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T22:01:01.911Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.183:2379"}
	{"level":"info","ts":"2023-08-17T22:01:01.912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T22:01:01.912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:01:21 up 1 min,  0 users,  load average: 0.66, 0.23, 0.08
	Linux test-preload-269501 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [abc7d517f848132f6cdf4c72ce204d8e62730115c9c3d48db3bff36d6a090184] <==
	* I0817 22:01:04.441085       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0817 22:01:04.441103       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0817 22:01:04.441329       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0817 22:01:04.441361       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0817 22:01:04.451883       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0817 22:01:04.452511       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0817 22:01:04.579011       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0817 22:01:04.585067       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 22:01:04.620561       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0817 22:01:04.633049       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 22:01:04.636952       1 cache.go:39] Caches are synced for autoregister controller
	I0817 22:01:04.637014       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0817 22:01:04.637628       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 22:01:04.646099       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0817 22:01:04.646214       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0817 22:01:05.117472       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 22:01:05.441997       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0817 22:01:06.232535       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 22:01:06.242838       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 22:01:06.276109       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 22:01:06.307571       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 22:01:06.329344       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 22:01:07.390529       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0817 22:01:17.064829       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 22:01:17.137220       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [5e580423ab32c58c8afaf6f1954e860ce38463dd4b637179094f32e718dab290] <==
	* I0817 22:01:17.011999       1 shared_informer.go:262] Caches are synced for stateful set
	I0817 22:01:17.055733       1 shared_informer.go:262] Caches are synced for endpoint
	W0817 22:01:17.099011       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-269501" does not exist
	I0817 22:01:17.127030       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0817 22:01:17.134547       1 shared_informer.go:262] Caches are synced for persistent volume
	I0817 22:01:17.134966       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0817 22:01:17.137319       1 shared_informer.go:262] Caches are synced for attach detach
	I0817 22:01:17.138906       1 shared_informer.go:262] Caches are synced for taint
	I0817 22:01:17.139014       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0817 22:01:17.139118       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-269501. Assuming now as a timestamp.
	I0817 22:01:17.139161       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0817 22:01:17.141502       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 22:01:17.141780       1 event.go:294] "Event occurred" object="test-preload-269501" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-269501 event: Registered Node test-preload-269501 in Controller"
	I0817 22:01:17.147378       1 shared_informer.go:262] Caches are synced for GC
	I0817 22:01:17.153744       1 shared_informer.go:262] Caches are synced for TTL
	I0817 22:01:17.160523       1 shared_informer.go:262] Caches are synced for daemon sets
	I0817 22:01:17.176188       1 shared_informer.go:262] Caches are synced for node
	I0817 22:01:17.176316       1 range_allocator.go:173] Starting range CIDR allocator
	I0817 22:01:17.176341       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0817 22:01:17.176367       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0817 22:01:17.202502       1 shared_informer.go:262] Caches are synced for resource quota
	I0817 22:01:17.205720       1 shared_informer.go:262] Caches are synced for resource quota
	I0817 22:01:17.617068       1 shared_informer.go:262] Caches are synced for garbage collector
	I0817 22:01:17.617175       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 22:01:17.633940       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [ac80d55a7be6656a0998bb4eea999c2367a7c2f7beec12ac624c144f9ed45d99] <==
	* I0817 22:01:07.345872       1 node.go:163] Successfully retrieved node IP: 192.168.39.183
	I0817 22:01:07.345960       1 server_others.go:138] "Detected node IP" address="192.168.39.183"
	I0817 22:01:07.346013       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0817 22:01:07.382811       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0817 22:01:07.382879       1 server_others.go:206] "Using iptables Proxier"
	I0817 22:01:07.382915       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0817 22:01:07.383306       1 server.go:661] "Version info" version="v1.24.4"
	I0817 22:01:07.383347       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:01:07.384163       1 config.go:317] "Starting service config controller"
	I0817 22:01:07.384209       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0817 22:01:07.384239       1 config.go:226] "Starting endpoint slice config controller"
	I0817 22:01:07.384254       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0817 22:01:07.386945       1 config.go:444] "Starting node config controller"
	I0817 22:01:07.386985       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0817 22:01:07.484959       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0817 22:01:07.485006       1 shared_informer.go:262] Caches are synced for service config
	I0817 22:01:07.487069       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [17bbee20d03464616f98a6fb99c41aca2cf66ccb55c1e157592aa3f5d03c82ae] <==
	* I0817 22:01:01.138207       1 serving.go:348] Generated self-signed cert in-memory
	W0817 22:01:04.498257       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 22:01:04.498312       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 22:01:04.498325       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 22:01:04.498341       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 22:01:04.589756       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0817 22:01:04.589877       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:01:04.599885       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0817 22:01:04.600231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0817 22:01:04.601266       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 22:01:04.601308       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 22:01:04.702529       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:00:20 UTC, ends at Thu 2023-08-17 22:01:21 UTC. --
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460000    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume\") pod \"coredns-6d4b75cb6d-vdk7w\" (UID: \"4579ff2c-8863-4fe0-89e9-782e93ef090a\") " pod="kube-system/coredns-6d4b75cb6d-vdk7w"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460103    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4-kube-proxy\") pod \"kube-proxy-pj5r8\" (UID: \"2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4\") " pod="kube-system/kube-proxy-pj5r8"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460126    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4-lib-modules\") pod \"kube-proxy-pj5r8\" (UID: \"2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4\") " pod="kube-system/kube-proxy-pj5r8"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460146    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/62bf3a79-2a2d-4773-9110-8c7d9559295e-tmp\") pod \"storage-provisioner\" (UID: \"62bf3a79-2a2d-4773-9110-8c7d9559295e\") " pod="kube-system/storage-provisioner"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460164    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tzdz\" (UniqueName: \"kubernetes.io/projected/62bf3a79-2a2d-4773-9110-8c7d9559295e-kube-api-access-4tzdz\") pod \"storage-provisioner\" (UID: \"62bf3a79-2a2d-4773-9110-8c7d9559295e\") " pod="kube-system/storage-provisioner"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460188    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpkk\" (UniqueName: \"kubernetes.io/projected/2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4-kube-api-access-hqpkk\") pod \"kube-proxy-pj5r8\" (UID: \"2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4\") " pod="kube-system/kube-proxy-pj5r8"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460212    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6pdr\" (UniqueName: \"kubernetes.io/projected/4579ff2c-8863-4fe0-89e9-782e93ef090a-kube-api-access-k6pdr\") pod \"coredns-6d4b75cb6d-vdk7w\" (UID: \"4579ff2c-8863-4fe0-89e9-782e93ef090a\") " pod="kube-system/coredns-6d4b75cb6d-vdk7w"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460235    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4-xtables-lock\") pod \"kube-proxy-pj5r8\" (UID: \"2ae4e18d-dc0c-40b8-96cd-83f4f54a4cb4\") " pod="kube-system/kube-proxy-pj5r8"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.460256    1104 reconciler.go:159] "Reconciler: start to sync state"
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.875665    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87ssm\" (UniqueName: \"kubernetes.io/projected/2e246c82-f058-48f6-b1c0-20429fa15324-kube-api-access-87ssm\") pod \"2e246c82-f058-48f6-b1c0-20429fa15324\" (UID: \"2e246c82-f058-48f6-b1c0-20429fa15324\") "
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.875763    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e246c82-f058-48f6-b1c0-20429fa15324-config-volume\") pod \"2e246c82-f058-48f6-b1c0-20429fa15324\" (UID: \"2e246c82-f058-48f6-b1c0-20429fa15324\") "
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: E0817 22:01:05.876211    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: E0817 22:01:05.876449    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume podName:4579ff2c-8863-4fe0-89e9-782e93ef090a nodeName:}" failed. No retries permitted until 2023-08-17 22:01:06.376368779 +0000 UTC m=+9.133273666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume") pod "coredns-6d4b75cb6d-vdk7w" (UID: "4579ff2c-8863-4fe0-89e9-782e93ef090a") : object "kube-system"/"coredns" not registered
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: W0817 22:01:05.878610    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2e246c82-f058-48f6-b1c0-20429fa15324/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: W0817 22:01:05.878644    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2e246c82-f058-48f6-b1c0-20429fa15324/volumes/kubernetes.io~projected/kube-api-access-87ssm: clearQuota called, but quotas disabled
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.879087    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e246c82-f058-48f6-b1c0-20429fa15324-kube-api-access-87ssm" (OuterVolumeSpecName: "kube-api-access-87ssm") pod "2e246c82-f058-48f6-b1c0-20429fa15324" (UID: "2e246c82-f058-48f6-b1c0-20429fa15324"). InnerVolumeSpecName "kube-api-access-87ssm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.879247    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e246c82-f058-48f6-b1c0-20429fa15324-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e246c82-f058-48f6-b1c0-20429fa15324" (UID: "2e246c82-f058-48f6-b1c0-20429fa15324"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.976171    1104 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e246c82-f058-48f6-b1c0-20429fa15324-config-volume\") on node \"test-preload-269501\" DevicePath \"\""
	Aug 17 22:01:05 test-preload-269501 kubelet[1104]: I0817 22:01:05.976228    1104 reconciler.go:384] "Volume detached for volume \"kube-api-access-87ssm\" (UniqueName: \"kubernetes.io/projected/2e246c82-f058-48f6-b1c0-20429fa15324-kube-api-access-87ssm\") on node \"test-preload-269501\" DevicePath \"\""
	Aug 17 22:01:06 test-preload-269501 kubelet[1104]: E0817 22:01:06.379049    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 17 22:01:06 test-preload-269501 kubelet[1104]: E0817 22:01:06.379170    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume podName:4579ff2c-8863-4fe0-89e9-782e93ef090a nodeName:}" failed. No retries permitted until 2023-08-17 22:01:07.379090004 +0000 UTC m=+10.135994901 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume") pod "coredns-6d4b75cb6d-vdk7w" (UID: "4579ff2c-8863-4fe0-89e9-782e93ef090a") : object "kube-system"/"coredns" not registered
	Aug 17 22:01:06 test-preload-269501 kubelet[1104]: E0817 22:01:06.518183    1104 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vdk7w" podUID=4579ff2c-8863-4fe0-89e9-782e93ef090a
	Aug 17 22:01:07 test-preload-269501 kubelet[1104]: E0817 22:01:07.387874    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 17 22:01:07 test-preload-269501 kubelet[1104]: E0817 22:01:07.387989    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume podName:4579ff2c-8863-4fe0-89e9-782e93ef090a nodeName:}" failed. No retries permitted until 2023-08-17 22:01:09.387974803 +0000 UTC m=+12.144879699 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4579ff2c-8863-4fe0-89e9-782e93ef090a-config-volume") pod "coredns-6d4b75cb6d-vdk7w" (UID: "4579ff2c-8863-4fe0-89e9-782e93ef090a") : object "kube-system"/"coredns" not registered
	Aug 17 22:01:07 test-preload-269501 kubelet[1104]: I0817 22:01:07.522796    1104 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2e246c82-f058-48f6-b1c0-20429fa15324 path="/var/lib/kubelet/pods/2e246c82-f058-48f6-b1c0-20429fa15324/volumes"
	
	* 
	* ==> storage-provisioner [a9607a46ca0ffefec77f5e535a1e85600d1660e178e61e18a51526b3177ef86d] <==
	* I0817 22:01:07.033770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-269501 -n test-preload-269501
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-269501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-269501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-269501
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-269501: (1.128376971s)
--- FAIL: TestPreload (192.52s)
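To triage this failure outside CI, a minimal reproduction sketch (assuming a checked-out minikube source tree with binaries built into out/, and assuming the harness still accepts a minikube-start-args flag as defined in test/integration; the exact flag names may vary by release):

	# hypothetical local invocation; adjust flags to match the harness in this tree
	go test -v -timeout 40m ./test/integration -run 'TestPreload' \
	  -args --minikube-start-args='--driver=kvm2 --container-runtime=crio'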

                                                
                                    
x
+
TestRunningBinaryUpgrade (145.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.4165358255.exe start -p running-upgrade-552852 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.4165358255.exe start -p running-upgrade-552852 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m19.679763508s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-552852 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-552852 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (3.900505453s)

                                                
                                                
-- stdout --
	* [running-upgrade-552852] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-552852 in cluster running-upgrade-552852
	* Updating the running kvm2 "running-upgrade-552852" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:07:52.942466  238691 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:07:52.942623  238691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:07:52.942632  238691 out.go:309] Setting ErrFile to fd 2...
	I0817 22:07:52.942637  238691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:07:52.942855  238691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:07:52.943480  238691 out.go:303] Setting JSON to false
	I0817 22:07:52.944500  238691 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24598,"bootTime":1692285475,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:07:52.944587  238691 start.go:138] virtualization: kvm guest
	I0817 22:07:52.947129  238691 out.go:177] * [running-upgrade-552852] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:07:52.949233  238691 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:07:52.950660  238691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:07:52.949278  238691 notify.go:220] Checking for updates...
	I0817 22:07:52.953610  238691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:07:52.955217  238691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:07:52.956865  238691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:07:52.958463  238691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:07:52.960393  238691 config.go:182] Loaded profile config "running-upgrade-552852": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0817 22:07:52.960424  238691 start_flags.go:683] config upgrade: Driver=kvm2
	I0817 22:07:52.960436  238691 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 22:07:52.960546  238691 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/running-upgrade-552852/config.json ...
	I0817 22:07:52.961280  238691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:07:52.961352  238691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:07:52.977676  238691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0817 22:07:52.978191  238691 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:07:52.978971  238691 main.go:141] libmachine: Using API Version  1
	I0817 22:07:52.979003  238691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:07:52.979477  238691 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:07:52.979740  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:52.982134  238691 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0817 22:07:52.983694  238691 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:07:52.984085  238691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:07:52.984133  238691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:07:53.001237  238691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0817 22:07:53.001755  238691 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:07:53.002379  238691 main.go:141] libmachine: Using API Version  1
	I0817 22:07:53.002417  238691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:07:53.002800  238691 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:07:53.003007  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:53.042824  238691 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:07:53.044395  238691 start.go:298] selected driver: kvm2
	I0817 22:07:53.044417  238691 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-552852 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.207 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:07:53.044584  238691 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:07:53.045399  238691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.045479  238691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:07:53.062112  238691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:07:53.062434  238691 cni.go:84] Creating CNI manager for ""
	I0817 22:07:53.062448  238691 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0817 22:07:53.062457  238691 start_flags.go:319] config:
	{Name:running-upgrade-552852 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.207 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:07:53.062620  238691 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.064518  238691 out.go:177] * Starting control plane node running-upgrade-552852 in cluster running-upgrade-552852
	I0817 22:07:53.066013  238691 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0817 22:07:53.092402  238691 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0817 22:07:53.092603  238691 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/running-upgrade-552852/config.json ...
	I0817 22:07:53.092954  238691 start.go:365] acquiring machines lock for running-upgrade-552852: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:07:53.093023  238691 start.go:369] acquired machines lock for "running-upgrade-552852" in 39.716µs
	I0817 22:07:53.093040  238691 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:07:53.093050  238691 fix.go:54] fixHost starting: minikube
	I0817 22:07:53.093043  238691 cache.go:107] acquiring lock: {Name:mk60bb30f2469cd306916ecb93512dbee64f157a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.093082  238691 cache.go:107] acquiring lock: {Name:mkc0a8fc3f41a464f15f1388dc97e3aaaf4d6666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.093202  238691 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0817 22:07:53.093283  238691 cache.go:107] acquiring lock: {Name:mkf5fb5a40f52cfebfe4ee9261f4cab98cf0963e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.093362  238691 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0817 22:07:53.093532  238691 cache.go:107] acquiring lock: {Name:mk30b4a5ba9b9731d306c13efb8512a2d8ab8831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.093589  238691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:07:53.093624  238691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:07:53.093630  238691 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0817 22:07:53.093052  238691 cache.go:107] acquiring lock: {Name:mkfccbc5616754568160da08e29d3984a332661c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.093771  238691 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 22:07:53.093779  238691 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 732.306µs
	I0817 22:07:53.093796  238691 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 22:07:53.093226  238691 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0817 22:07:53.093875  238691 cache.go:107] acquiring lock: {Name:mkec07708b49fda7926b4b4a3d419daebeb1085b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.093970  238691 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0817 22:07:53.093995  238691 cache.go:107] acquiring lock: {Name:mkd1d0126053b887c7a1de945a97410b09c5069f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.094123  238691 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0817 22:07:53.094132  238691 cache.go:107] acquiring lock: {Name:mk398ad0a6d21cc8ea91b4f08eceb205ff700544 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:07:53.094195  238691 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0817 22:07:53.095968  238691 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0817 22:07:53.096050  238691 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0817 22:07:53.096097  238691 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0817 22:07:53.096152  238691 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0817 22:07:53.096334  238691 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0817 22:07:53.096325  238691 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0817 22:07:53.096373  238691 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0817 22:07:53.112434  238691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0817 22:07:53.112826  238691 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:07:53.113322  238691 main.go:141] libmachine: Using API Version  1
	I0817 22:07:53.113348  238691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:07:53.113719  238691 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:07:53.113908  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:53.114102  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetState
	I0817 22:07:53.115720  238691 fix.go:102] recreateIfNeeded on running-upgrade-552852: state=Running err=<nil>
	W0817 22:07:53.115756  238691 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:07:53.117752  238691 out.go:177] * Updating the running kvm2 "running-upgrade-552852" VM ...
	I0817 22:07:53.119135  238691 machine.go:88] provisioning docker machine ...
	I0817 22:07:53.119161  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:53.119391  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetMachineName
	I0817 22:07:53.119555  238691 buildroot.go:166] provisioning hostname "running-upgrade-552852"
	I0817 22:07:53.119578  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetMachineName
	I0817 22:07:53.119752  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:53.122655  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.123175  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:53.123213  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.123373  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:53.123631  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:53.123796  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:53.123951  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:53.124132  238691 main.go:141] libmachine: Using SSH client type: native
	I0817 22:07:53.124642  238691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.207 22 <nil> <nil>}
	I0817 22:07:53.124674  238691 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-552852 && echo "running-upgrade-552852" | sudo tee /etc/hostname
	I0817 22:07:53.257931  238691 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-552852
	
	I0817 22:07:53.257971  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:53.261062  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.261504  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:53.261553  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.261755  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:53.261985  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:53.262211  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:53.262396  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:53.262606  238691 main.go:141] libmachine: Using SSH client type: native
	I0817 22:07:53.263256  238691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.207 22 <nil> <nil>}
	I0817 22:07:53.263294  238691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-552852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-552852/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-552852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:07:53.284709  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0817 22:07:53.285461  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0817 22:07:53.288231  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0817 22:07:53.289954  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0817 22:07:53.301541  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0817 22:07:53.302474  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0817 22:07:53.357389  238691 cache.go:162] opening:  /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0817 22:07:53.375134  238691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:07:53.375171  238691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:07:53.375218  238691 buildroot.go:174] setting up certificates
	I0817 22:07:53.375233  238691 provision.go:83] configureAuth start
	I0817 22:07:53.375254  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetMachineName
	I0817 22:07:53.375578  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetIP
	I0817 22:07:53.380353  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:53.380398  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.380471  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:53.380526  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.383342  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.383692  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:53.383773  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.384023  238691 provision.go:138] copyHostCerts
	I0817 22:07:53.384089  238691 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:07:53.384101  238691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:07:53.384782  238691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:07:53.384906  238691 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:07:53.384920  238691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:07:53.384946  238691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:07:53.385008  238691 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:07:53.385022  238691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:07:53.385049  238691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:07:53.385165  238691 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-552852 san=[192.168.83.207 192.168.83.207 localhost 127.0.0.1 minikube running-upgrade-552852]
	I0817 22:07:53.404731  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0817 22:07:53.404774  238691 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 310.650477ms
	I0817 22:07:53.404790  238691 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0817 22:07:53.846155  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0817 22:07:53.846429  238691 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 752.434802ms
	I0817 22:07:53.846492  238691 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0817 22:07:53.847808  238691 provision.go:172] copyRemoteCerts
	I0817 22:07:53.847874  238691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:07:53.847939  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:53.851138  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.851552  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:53.851616  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:53.851897  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:53.852183  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:53.852349  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:53.852504  238691 sshutil.go:53] new ssh client: &{IP:192.168.83.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/running-upgrade-552852/id_rsa Username:docker}
	I0817 22:07:53.950823  238691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:07:53.976129  238691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:07:54.021099  238691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:07:54.049789  238691 provision.go:86] duration metric: configureAuth took 674.534374ms
	I0817 22:07:54.049825  238691 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:07:54.050043  238691 config.go:182] Loaded profile config "running-upgrade-552852": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0817 22:07:54.050175  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:54.053898  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.053935  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:54.053969  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.054026  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:54.054454  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.060285  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.061233  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:54.061538  238691 main.go:141] libmachine: Using SSH client type: native
	I0817 22:07:54.062117  238691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.207 22 <nil> <nil>}
	I0817 22:07:54.062145  238691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:07:54.232174  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0817 22:07:54.232213  238691 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.1391311s
	I0817 22:07:54.232232  238691 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0817 22:07:54.371473  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0817 22:07:54.371508  238691 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.278479344s
	I0817 22:07:54.371532  238691 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0817 22:07:54.396932  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0817 22:07:54.396966  238691 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.303443204s
	I0817 22:07:54.396983  238691 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0817 22:07:54.653760  238691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:07:54.653815  238691 machine.go:91] provisioned docker machine in 1.534663551s
	I0817 22:07:54.653828  238691 start.go:300] post-start starting for "running-upgrade-552852" (driver="kvm2")
	I0817 22:07:54.653850  238691 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:07:54.653905  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:54.654375  238691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:07:54.654420  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:54.658170  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.659064  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:54.659099  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.659128  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:54.659378  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.659582  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:54.659776  238691 sshutil.go:53] new ssh client: &{IP:192.168.83.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/running-upgrade-552852/id_rsa Username:docker}
	I0817 22:07:54.754629  238691 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:07:54.760284  238691 info.go:137] Remote host: Buildroot 2019.02.7
	I0817 22:07:54.760313  238691 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:07:54.760390  238691 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:07:54.760491  238691 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:07:54.760610  238691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:07:54.770326  238691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:07:54.776438  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0817 22:07:54.776471  238691 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.68260339s
	I0817 22:07:54.776485  238691 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0817 22:07:54.791547  238691 start.go:303] post-start completed in 137.697369ms
	I0817 22:07:54.791581  238691 fix.go:56] fixHost completed within 1.698531205s
	I0817 22:07:54.791613  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:54.794689  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.795094  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:54.795135  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.795312  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:54.795560  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.795761  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.795931  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:54.796138  238691 main.go:141] libmachine: Using SSH client type: native
	I0817 22:07:54.796562  238691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.207 22 <nil> <nil>}
	I0817 22:07:54.796578  238691 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 22:07:54.927106  238691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692310074.923769349
	
	I0817 22:07:54.927164  238691 fix.go:206] guest clock: 1692310074.923769349
	I0817 22:07:54.927176  238691 fix.go:219] Guest: 2023-08-17 22:07:54.923769349 +0000 UTC Remote: 2023-08-17 22:07:54.791586064 +0000 UTC m=+1.888742648 (delta=132.183285ms)
	I0817 22:07:54.927203  238691 fix.go:190] guest clock delta is within tolerance: 132.183285ms
	I0817 22:07:54.927209  238691 start.go:83] releasing machines lock for "running-upgrade-552852", held for 1.834175911s
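The date +%s.%N round-trip above is the guest-clock sanity check: fix.go parses the guest timestamp, compares it against the host clock, and only intervenes when the drift exceeds a tolerance (here the ~132ms delta passes). A minimal sketch of that comparison, assuming a 2s tolerance rather than whatever fix.go actually uses:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // driftWithinTolerance parses the output of `date +%s.%N` as run on the guest
    // and reports the guest/host delta plus whether it stays under maxDrift.
    func driftWithinTolerance(guestOut string, hostNow time.Time, maxDrift time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, false, fmt.Errorf("parse guest clock: %w", err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := hostNow.Sub(guest)
    	ok := math.Abs(float64(delta)) <= float64(maxDrift)
    	return delta, ok, nil
    }

    func main() {
    	// Values lifted from the log lines above.
    	host := time.Date(2023, 8, 17, 22, 7, 54, 791586064, time.UTC)
    	delta, ok, err := driftWithinTolerance("1692310074.923769349", host, 2*time.Second)
    	fmt.Println(delta, ok, err)
    }
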
	I0817 22:07:54.927233  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:54.927522  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetIP
	I0817 22:07:54.930030  238691 cache.go:157] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0817 22:07:54.930107  238691 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.836830762s
	I0817 22:07:54.930128  238691 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0817 22:07:54.930145  238691 cache.go:87] Successfully saved all images to host disk.
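The cache.go lines above show why the image step is cheap here: every image maps to a tar file under .minikube/cache/images, and a tar that already exists is reported as a successful save without re-pulling anything. A rough sketch of that check, with the path layout taken from the log and the actual pull-and-save left as a stub:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cachePath maps "registry.k8s.io/etcd:3.4.3-0" to
    // <root>/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0, as seen in the log.
    func cachePath(root, image string) string {
    	return filepath.Join(root, "cache", "images", "amd64", strings.ReplaceAll(image, ":", "_"))
    }

    // ensureCached skips work when the tarball is already on disk.
    func ensureCached(root, image string) error {
    	p := cachePath(root, image)
    	if _, err := os.Stat(p); err == nil {
    		fmt.Printf("%s exists, skipping save\n", p)
    		return nil
    	}
    	return fmt.Errorf("pull-and-save not implemented in this sketch: %s", image)
    }

    func main() {
    	_ = ensureCached(filepath.Join(os.Getenv("HOME"), ".minikube"), "registry.k8s.io/etcd:3.4.3-0")
    }
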
	I0817 22:07:54.930894  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.931300  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:54.931330  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.931551  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:54.932785  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:54.932996  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .DriverName
	I0817 22:07:54.933097  238691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:07:54.933162  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:54.933236  238691 ssh_runner.go:195] Run: cat /version.json
	I0817 22:07:54.933265  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHHostname
	I0817 22:07:54.936435  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.936744  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.936773  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:54.936808  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.936995  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:54.937195  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.937271  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:4b:30", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:06:10 +0000 UTC Type:0 Mac:52:54:00:cf:4b:30 Iaid: IPaddr:192.168.83.207 Prefix:24 Hostname:running-upgrade-552852 Clientid:01:52:54:00:cf:4b:30}
	I0817 22:07:54.937294  238691 main.go:141] libmachine: (running-upgrade-552852) DBG | domain running-upgrade-552852 has defined IP address 192.168.83.207 and MAC address 52:54:00:cf:4b:30 in network minikube-net
	I0817 22:07:54.937402  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:54.937476  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHPort
	I0817 22:07:54.937556  238691 sshutil.go:53] new ssh client: &{IP:192.168.83.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/running-upgrade-552852/id_rsa Username:docker}
	I0817 22:07:54.938178  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHKeyPath
	I0817 22:07:54.938370  238691 main.go:141] libmachine: (running-upgrade-552852) Calling .GetSSHUsername
	I0817 22:07:54.938592  238691 sshutil.go:53] new ssh client: &{IP:192.168.83.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/running-upgrade-552852/id_rsa Username:docker}
	W0817 22:07:55.051080  238691 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0817 22:07:55.051188  238691 ssh_runner.go:195] Run: systemctl --version
	I0817 22:07:55.058428  238691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:07:55.207131  238691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:07:55.214153  238691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:07:55.214243  238691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:07:55.220455  238691 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
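The find/mv step above moves any pre-existing bridge or podman CNI config in /etc/cni/net.d aside by renaming it to <name>.mk_disabled, so it cannot conflict with the CNI minikube is about to configure; on this guest nothing matched. A simplified local sketch of the same rename (the real step runs the find command over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func disableBridgeCNIConfigs(dir string) error {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return err
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return err
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    	return nil
    }

    func main() {
    	_ = disableBridgeCNIConfigs("/etc/cni/net.d")
    }
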
	I0817 22:07:55.220486  238691 start.go:466] detecting cgroup driver to use...
	I0817 22:07:55.220566  238691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:07:55.232012  238691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:07:55.252452  238691 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:07:55.252530  238691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:07:55.266259  238691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:07:55.278577  238691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0817 22:07:55.289584  238691 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0817 22:07:55.289657  238691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
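The cri-docker handling above is best-effort: on this old v1.6.0 guest image the unit does not exist, so the disable call fails, is logged as a warning ("might be ok"), and the start continues. A sketch of that tolerate-and-continue pattern, with the unit names copied from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    // bestEffort runs a systemctl subcommand and downgrades any failure
    // (such as a missing unit file) to a warning instead of aborting.
    func bestEffort(args ...string) {
    	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Printf("warning: systemctl %v failed (might be ok): %v\n%s", args, err, out)
    	}
    }

    func main() {
    	bestEffort("stop", "-f", "cri-docker.socket")
    	bestEffort("stop", "-f", "cri-docker.service")
    	bestEffort("disable", "cri-docker.socket")
    	bestEffort("mask", "cri-docker.service")
    }
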
	I0817 22:07:55.420567  238691 docker.go:212] disabling docker service ...
	I0817 22:07:55.420672  238691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:07:56.442328  238691 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.021611009s)
	I0817 22:07:56.442436  238691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:07:56.456445  238691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:07:56.564455  238691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:07:56.735997  238691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:07:56.748784  238691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:07:56.772546  238691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:07:56.772656  238691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:07:56.787722  238691 out.go:177] 
	W0817 22:07:56.789731  238691 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0817 22:07:56.789759  238691 out.go:239] * 
	* 
	W0817 22:07:56.790825  238691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 22:07:56.794024  238691 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-552852 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
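The exit status 90 (RUNTIME_ENABLE) traces back to the sed shown in the log: the new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the v1.6.0 guest image this profile still boots has no such drop-in file, so sed exits 1 and the start aborts before Kubernetes is ever touched. The sketch below shows a hypothetical fallback that edits whichever CRI-O config actually exists; it is an illustration of the failure mode, not a claim about how minikube addresses it (the old image presumably keeps its settings in /etc/crio/crio.conf):

    package main

    import (
    	"fmt"
    	"os"
    )

    func crioConfPath() (string, error) {
    	candidates := []string{
    		"/etc/crio/crio.conf.d/02-crio.conf", // newer guest images (drop-in style)
    		"/etc/crio/crio.conf",                // older images such as the v1.6.0 ISO, presumably
    	}
    	for _, p := range candidates {
    		if _, err := os.Stat(p); err == nil {
    			return p, nil
    		}
    	}
    	return "", fmt.Errorf("no CRI-O config found in %v", candidates)
    }

    func main() {
    	p, err := crioConfPath()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The real step would now run:
    	//   sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' <path>
    	fmt.Println("would update pause_image in", p)
    }
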
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-17 22:07:56.813811132 +0000 UTC m=+3450.454578850
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-552852 -n running-upgrade-552852
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-552852 -n running-upgrade-552852: exit status 4 (270.000003ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:07:57.048133  238856 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-552852" does not appear in /home/jenkins/minikube-integration/16865-203458/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-552852" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
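Status exits 4 here because the failed start never registered an endpoint for running-upgrade-552852 in the shared kubeconfig; the file still carries the stale minikube-vm context written by the v1.6.2 binary, hence the "does not appear in .../kubeconfig" error and the update-context hint. A small sketch of that lookup using client-go (the fallback path is an assumption; the test sets KUBECONFIG itself):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	path := os.Getenv("KUBECONFIG")
    	if path == "" {
    		path = filepath.Join(os.Getenv("HOME"), ".kube", "config") // fallback for illustration only
    	}
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	const profile = "running-upgrade-552852"
    	if _, ok := cfg.Clusters[profile]; !ok {
    		fmt.Printf("%q does not appear in %s\n", profile, path)
    	}
    }
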
helpers_test.go:175: Cleaning up "running-upgrade-552852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-552852
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-552852: (1.278116743s)
--- FAIL: TestRunningBinaryUpgrade (145.57s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (335.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.2559739143.exe start -p stopped-upgrade-717933 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.2559739143.exe start -p stopped-upgrade-717933 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m24.556263112s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.2559739143.exe -p stopped-upgrade-717933 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.2559739143.exe -p stopped-upgrade-717933 stop: (1m32.554559305s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-717933 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0817 22:12:07.553209  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-717933 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m38.361480663s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-717933] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-717933 in cluster stopped-upgrade-717933
	* Restarting existing kvm2 VM for "stopped-upgrade-717933" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:11:32.467160  243993 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:11:32.467307  243993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:11:32.467317  243993 out.go:309] Setting ErrFile to fd 2...
	I0817 22:11:32.467321  243993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:11:32.467514  243993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:11:32.468101  243993 out.go:303] Setting JSON to false
	I0817 22:11:32.469099  243993 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24818,"bootTime":1692285475,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:11:32.469167  243993 start.go:138] virtualization: kvm guest
	I0817 22:11:32.471480  243993 out.go:177] * [stopped-upgrade-717933] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:11:32.473626  243993 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:11:32.473690  243993 notify.go:220] Checking for updates...
	I0817 22:11:32.475311  243993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:11:32.476996  243993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:11:32.478581  243993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:11:32.480182  243993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:11:32.481685  243993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:11:32.484298  243993 config.go:182] Loaded profile config "stopped-upgrade-717933": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0817 22:11:32.484339  243993 start_flags.go:683] config upgrade: Driver=kvm2
	I0817 22:11:32.484353  243993 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 22:11:32.484458  243993 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/stopped-upgrade-717933/config.json ...
	I0817 22:11:32.485034  243993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:11:32.485096  243993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:11:32.500253  243993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0817 22:11:32.500823  243993 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:11:32.501547  243993 main.go:141] libmachine: Using API Version  1
	I0817 22:11:32.501572  243993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:11:32.501899  243993 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:11:32.502108  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:11:32.504483  243993 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0817 22:11:32.505983  243993 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:11:32.506312  243993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:11:32.506358  243993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:11:32.521927  243993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38129
	I0817 22:11:32.522379  243993 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:11:32.523015  243993 main.go:141] libmachine: Using API Version  1
	I0817 22:11:32.523067  243993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:11:32.523447  243993 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:11:32.523678  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:11:32.563222  243993 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:11:32.564738  243993 start.go:298] selected driver: kvm2
	I0817 22:11:32.564752  243993 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-717933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.127 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:11:32.564890  243993 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:11:32.565606  243993 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.565703  243993 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:11:32.581752  243993 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:11:32.582176  243993 cni.go:84] Creating CNI manager for ""
	I0817 22:11:32.582198  243993 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0817 22:11:32.582214  243993 start_flags.go:319] config:
	{Name:stopped-upgrade-717933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.127 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:11:32.582474  243993 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.584444  243993 out.go:177] * Starting control plane node stopped-upgrade-717933 in cluster stopped-upgrade-717933
	I0817 22:11:32.585727  243993 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0817 22:11:32.609106  243993 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0817 22:11:32.609255  243993 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/stopped-upgrade-717933/config.json ...
	I0817 22:11:32.609400  243993 cache.go:107] acquiring lock: {Name:mk60bb30f2469cd306916ecb93512dbee64f157a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609416  243993 cache.go:107] acquiring lock: {Name:mk30b4a5ba9b9731d306c13efb8512a2d8ab8831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609450  243993 cache.go:107] acquiring lock: {Name:mkec07708b49fda7926b4b4a3d419daebeb1085b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609407  243993 cache.go:107] acquiring lock: {Name:mkfccbc5616754568160da08e29d3984a332661c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609543  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0817 22:11:32.609544  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0817 22:11:32.609563  243993 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 156.654µs
	I0817 22:11:32.609571  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 22:11:32.609573  243993 start.go:365] acquiring machines lock for stopped-upgrade-717933: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:11:32.609590  243993 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0817 22:11:32.609590  243993 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 190.962µs
	I0817 22:11:32.609606  243993 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 22:11:32.609581  243993 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 186.913µs
	I0817 22:11:32.609545  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0817 22:11:32.609646  243993 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0817 22:11:32.609465  243993 cache.go:107] acquiring lock: {Name:mk398ad0a6d21cc8ea91b4f08eceb205ff700544 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609658  243993 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 210.212µs
	I0817 22:11:32.609673  243993 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0817 22:11:32.609657  243993 cache.go:107] acquiring lock: {Name:mkd1d0126053b887c7a1de945a97410b09c5069f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609687  243993 cache.go:107] acquiring lock: {Name:mkc0a8fc3f41a464f15f1388dc97e3aaaf4d6666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.609768  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0817 22:11:32.609787  243993 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 176.151µs
	I0817 22:11:32.609811  243993 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0817 22:11:32.609717  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0817 22:11:32.609833  243993 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 398.324µs
	I0817 22:11:32.609844  243993 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0817 22:11:32.609848  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0817 22:11:32.609878  243993 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 273.001µs
	I0817 22:11:32.609905  243993 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0817 22:11:32.610048  243993 cache.go:107] acquiring lock: {Name:mkf5fb5a40f52cfebfe4ee9261f4cab98cf0963e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:11:32.610188  243993 cache.go:115] /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0817 22:11:32.610200  243993 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 218.903µs
	I0817 22:11:32.610213  243993 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0817 22:11:32.610226  243993 cache.go:87] Successfully saved all images to host disk.
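The 404 from preload.go is expected: no preloaded image tarball is published for Kubernetes v1.17.0 with cri-o, so the new binary falls back to the per-image cache, which an earlier run already populated (hence every image "exists ... succeeded" within microseconds). A sketch of that probe-then-fallback, with the URL copied from the log:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // preloadExists probes the published tarball with a HEAD request.
    func preloadExists(url string) bool {
    	resp, err := http.Head(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4"
    	if !preloadExists(url) {
    		fmt.Println("no preload for this k8s/runtime combination, using per-image cache")
    	}
    }
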
	I0817 22:12:23.827993  243993 start.go:369] acquired machines lock for "stopped-upgrade-717933" in 51.218372807s
	I0817 22:12:23.828042  243993 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:12:23.828049  243993 fix.go:54] fixHost starting: minikube
	I0817 22:12:23.828527  243993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:12:23.828588  243993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:12:23.847986  243993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0817 22:12:23.848450  243993 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:12:23.849047  243993 main.go:141] libmachine: Using API Version  1
	I0817 22:12:23.849077  243993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:12:23.849412  243993 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:12:23.849638  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:12:23.849824  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetState
	I0817 22:12:23.851648  243993 fix.go:102] recreateIfNeeded on stopped-upgrade-717933: state=Stopped err=<nil>
	I0817 22:12:23.851679  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	W0817 22:12:23.851837  243993 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:12:23.854231  243993 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-717933" ...
	I0817 22:12:23.855968  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .Start
	I0817 22:12:23.856174  243993 main.go:141] libmachine: (stopped-upgrade-717933) Ensuring networks are active...
	I0817 22:12:23.857101  243993 main.go:141] libmachine: (stopped-upgrade-717933) Ensuring network default is active
	I0817 22:12:23.857667  243993 main.go:141] libmachine: (stopped-upgrade-717933) Ensuring network minikube-net is active
	I0817 22:12:23.858092  243993 main.go:141] libmachine: (stopped-upgrade-717933) Getting domain xml...
	I0817 22:12:23.858921  243993 main.go:141] libmachine: (stopped-upgrade-717933) Creating domain...
	I0817 22:12:25.365960  243993 main.go:141] libmachine: (stopped-upgrade-717933) Waiting to get IP...
	I0817 22:12:25.367392  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:25.368093  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:25.368200  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:25.368090  244488 retry.go:31] will retry after 202.119888ms: waiting for machine to come up
	I0817 22:12:25.572062  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:25.572982  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:25.573183  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:25.573099  244488 retry.go:31] will retry after 300.33391ms: waiting for machine to come up
	I0817 22:12:25.874722  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:25.875311  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:25.875345  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:25.875241  244488 retry.go:31] will retry after 409.088672ms: waiting for machine to come up
	I0817 22:12:26.285754  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:26.286648  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:26.286678  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:26.286334  244488 retry.go:31] will retry after 390.038961ms: waiting for machine to come up
	I0817 22:12:26.678020  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:26.678701  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:26.678730  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:26.678593  244488 retry.go:31] will retry after 613.150515ms: waiting for machine to come up
	I0817 22:12:27.293978  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:27.294703  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:27.294729  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:27.294605  244488 retry.go:31] will retry after 618.261737ms: waiting for machine to come up
	I0817 22:12:27.914182  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:27.914793  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:27.914825  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:27.914753  244488 retry.go:31] will retry after 920.422228ms: waiting for machine to come up
	I0817 22:12:28.836551  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:28.837419  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:28.837445  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:28.837323  244488 retry.go:31] will retry after 1.483280517s: waiting for machine to come up
	I0817 22:12:30.321920  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:30.322705  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:30.322751  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:30.322604  244488 retry.go:31] will retry after 1.258566378s: waiting for machine to come up
	I0817 22:12:31.582565  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:31.583052  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:31.583082  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:31.583002  244488 retry.go:31] will retry after 1.531160764s: waiting for machine to come up
	I0817 22:12:33.117736  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:33.118409  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:33.118444  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:33.118317  244488 retry.go:31] will retry after 2.001655635s: waiting for machine to come up
	I0817 22:12:35.121500  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:35.122133  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:35.122162  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:35.122043  244488 retry.go:31] will retry after 2.892077755s: waiting for machine to come up
	I0817 22:12:38.015625  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:38.016178  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:38.016206  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:38.016153  244488 retry.go:31] will retry after 4.056771926s: waiting for machine to come up
	I0817 22:12:42.075448  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:42.075912  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:42.075942  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:42.075862  244488 retry.go:31] will retry after 5.090256975s: waiting for machine to come up
	I0817 22:12:47.168299  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:47.168896  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:47.168920  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:47.168844  244488 retry.go:31] will retry after 6.492058261s: waiting for machine to come up
	I0817 22:12:53.663360  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:12:53.663843  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | unable to find current IP address of domain stopped-upgrade-717933 in network minikube-net
	I0817 22:12:53.663868  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | I0817 22:12:53.663793  244488 retry.go:31] will retry after 8.318816199s: waiting for machine to come up
	I0817 22:13:01.983708  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:01.984207  243993 main.go:141] libmachine: (stopped-upgrade-717933) Found IP for machine: 192.168.83.127
	I0817 22:13:01.984237  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has current primary IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:01.984258  243993 main.go:141] libmachine: (stopped-upgrade-717933) Reserving static IP address...
	I0817 22:13:01.984862  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "stopped-upgrade-717933", mac: "52:54:00:51:db:4e", ip: "192.168.83.127"} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:01.984910  243993 main.go:141] libmachine: (stopped-upgrade-717933) Reserved static IP address: 192.168.83.127
	I0817 22:13:01.984931  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-717933", mac: "52:54:00:51:db:4e", ip: "192.168.83.127"}
	I0817 22:13:01.984948  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | Getting to WaitForSSH function...
	I0817 22:13:01.984962  243993 main.go:141] libmachine: (stopped-upgrade-717933) Waiting for SSH to be available...
	I0817 22:13:01.987378  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:01.987734  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:01.987778  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:01.987830  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | Using SSH client type: external
	I0817 22:13:01.987872  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/stopped-upgrade-717933/id_rsa (-rw-------)
	I0817 22:13:01.987916  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/stopped-upgrade-717933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:13:01.987941  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | About to run SSH command:
	I0817 22:13:01.987951  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | exit 0
	I0817 22:13:02.125739  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | SSH cmd err, output: <nil>: 
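The retry.go lines above are the wait-for-machine loop: poll the DHCP leases for the VM's MAC address, and when no lease has appeared yet, sleep for a growing interval before trying again; once the IP is known, the same pattern waits for SSH (the "exit 0" probe). A minimal sketch of such a loop; the growth factor, cap and jitter are assumptions, not minikube's exact values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls check() until it succeeds or maxWait elapses, sleeping for a
    // doubling, capped, jittered interval between attempts.
    func waitFor(check func() error, maxWait time.Duration) error {
    	deadline := time.Now().Add(maxWait)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if err := check(); err == nil {
    			return nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		if delay *= 2; delay > 8*time.Second {
    			delay = 8 * time.Second
    		}
    	}
    	return errors.New("timed out waiting for machine")
    }

    func main() {
    	attempts := 0
    	_ = waitFor(func() error {
    		if attempts++; attempts < 4 {
    			return errors.New("no IP yet")
    		}
    		return nil
    	}, time.Minute)
    }
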
	I0817 22:13:02.126135  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetConfigRaw
	I0817 22:13:02.126748  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetIP
	I0817 22:13:02.129711  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.130128  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.130169  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.130377  243993 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/stopped-upgrade-717933/config.json ...
	I0817 22:13:02.130564  243993 machine.go:88] provisioning docker machine ...
	I0817 22:13:02.130588  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:13:02.130807  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetMachineName
	I0817 22:13:02.130989  243993 buildroot.go:166] provisioning hostname "stopped-upgrade-717933"
	I0817 22:13:02.131013  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetMachineName
	I0817 22:13:02.131167  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:02.133537  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.133905  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.133941  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.134087  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:02.134275  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.134453  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.134598  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:02.134761  243993 main.go:141] libmachine: Using SSH client type: native
	I0817 22:13:02.135227  243993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.127 22 <nil> <nil>}
	I0817 22:13:02.135242  243993 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-717933 && echo "stopped-upgrade-717933" | sudo tee /etc/hostname
	I0817 22:13:02.278669  243993 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-717933
	
	I0817 22:13:02.278702  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:02.282006  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.282459  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.282507  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.282715  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:02.282935  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.283097  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.283237  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:02.283401  243993 main.go:141] libmachine: Using SSH client type: native
	I0817 22:13:02.283828  243993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.127 22 <nil> <nil>}
	I0817 22:13:02.283857  243993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-717933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-717933/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-717933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:13:02.421490  243993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:13:02.421517  243993 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:13:02.421542  243993 buildroot.go:174] setting up certificates
	I0817 22:13:02.421562  243993 provision.go:83] configureAuth start
	I0817 22:13:02.421580  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetMachineName
	I0817 22:13:02.421855  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetIP
	I0817 22:13:02.424853  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.425264  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.425298  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.425491  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:02.428242  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.428640  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.428677  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.428873  243993 provision.go:138] copyHostCerts
	I0817 22:13:02.428935  243993 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:13:02.428947  243993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:13:02.429022  243993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:13:02.429113  243993 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:13:02.429122  243993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:13:02.429147  243993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:13:02.429250  243993 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:13:02.429262  243993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:13:02.429287  243993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:13:02.429329  243993 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-717933 san=[192.168.83.127 192.168.83.127 localhost 127.0.0.1 minikube stopped-upgrade-717933]
	I0817 22:13:02.491264  243993 provision.go:172] copyRemoteCerts
	I0817 22:13:02.491324  243993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:13:02.491351  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:02.494614  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.495086  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.495129  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.495441  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:02.495652  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.495872  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:02.496013  243993 sshutil.go:53] new ssh client: &{IP:192.168.83.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/stopped-upgrade-717933/id_rsa Username:docker}
	I0817 22:13:02.588843  243993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:13:02.604013  243993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:13:02.619735  243993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:13:02.633946  243993 provision.go:86] duration metric: configureAuth took 212.364056ms
	I0817 22:13:02.633990  243993 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:13:02.634282  243993 config.go:182] Loaded profile config "stopped-upgrade-717933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0817 22:13:02.634405  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:02.637864  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.638267  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:02.638303  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:02.638536  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:02.638766  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.638962  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:02.639136  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:02.639312  243993 main.go:141] libmachine: Using SSH client type: native
	I0817 22:13:02.639940  243993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.127 22 <nil> <nil>}
	I0817 22:13:02.639975  243993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:13:09.832451  243993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:13:09.832483  243993 machine.go:91] provisioned docker machine in 7.701907293s
	I0817 22:13:09.832498  243993 start.go:300] post-start starting for "stopped-upgrade-717933" (driver="kvm2")
	I0817 22:13:09.832513  243993 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:13:09.832536  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:13:09.832930  243993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:13:09.832970  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:09.835766  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:09.836207  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:09.836242  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:09.836493  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:09.836697  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:09.836858  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:09.837007  243993 sshutil.go:53] new ssh client: &{IP:192.168.83.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/stopped-upgrade-717933/id_rsa Username:docker}
	I0817 22:13:09.930721  243993 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:13:09.935846  243993 info.go:137] Remote host: Buildroot 2019.02.7
	I0817 22:13:09.935878  243993 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:13:09.935975  243993 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:13:09.936071  243993 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:13:09.936169  243993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:13:09.942816  243993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:13:09.957466  243993 start.go:303] post-start completed in 124.945635ms
	I0817 22:13:09.957505  243993 fix.go:56] fixHost completed within 46.12945549s
	I0817 22:13:09.957540  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:09.960708  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:09.961120  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:09.961147  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:09.961411  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:09.961631  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:09.961857  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:09.962008  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:09.962210  243993 main.go:141] libmachine: Using SSH client type: native
	I0817 22:13:09.962696  243993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.83.127 22 <nil> <nil>}
	I0817 22:13:09.962714  243993 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 22:13:10.090945  243993 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692310390.019426070
	
	I0817 22:13:10.090970  243993 fix.go:206] guest clock: 1692310390.019426070
	I0817 22:13:10.090977  243993 fix.go:219] Guest: 2023-08-17 22:13:10.01942607 +0000 UTC Remote: 2023-08-17 22:13:09.957510332 +0000 UTC m=+97.532428196 (delta=61.915738ms)
	I0817 22:13:10.090998  243993 fix.go:190] guest clock delta is within tolerance: 61.915738ms
	I0817 22:13:10.091003  243993 start.go:83] releasing machines lock for "stopped-upgrade-717933", held for 46.262983844s
	I0817 22:13:10.091035  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:13:10.091325  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetIP
	I0817 22:13:10.094161  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:10.094517  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:10.094552  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:10.094712  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:13:10.095324  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:13:10.095532  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .DriverName
	I0817 22:13:10.095625  243993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:13:10.095680  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:10.095789  243993 ssh_runner.go:195] Run: cat /version.json
	I0817 22:13:10.095818  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHHostname
	I0817 22:13:10.098881  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:10.099059  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:10.099402  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:10.099432  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:10.099600  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:10.099604  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:db:4e", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-08-17 23:12:56 +0000 UTC Type:0 Mac:52:54:00:51:db:4e Iaid: IPaddr:192.168.83.127 Prefix:24 Hostname:stopped-upgrade-717933 Clientid:01:52:54:00:51:db:4e}
	I0817 22:13:10.099633  243993 main.go:141] libmachine: (stopped-upgrade-717933) DBG | domain stopped-upgrade-717933 has defined IP address 192.168.83.127 and MAC address 52:54:00:51:db:4e in network minikube-net
	I0817 22:13:10.099785  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHPort
	I0817 22:13:10.099819  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:10.099935  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHKeyPath
	I0817 22:13:10.099978  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:10.100097  243993 main.go:141] libmachine: (stopped-upgrade-717933) Calling .GetSSHUsername
	I0817 22:13:10.100176  243993 sshutil.go:53] new ssh client: &{IP:192.168.83.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/stopped-upgrade-717933/id_rsa Username:docker}
	I0817 22:13:10.100202  243993 sshutil.go:53] new ssh client: &{IP:192.168.83.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/stopped-upgrade-717933/id_rsa Username:docker}
	W0817 22:13:10.218023  243993 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0817 22:13:10.218132  243993 ssh_runner.go:195] Run: systemctl --version
	I0817 22:13:10.222897  243993 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:13:10.382469  243993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:13:10.389653  243993 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:13:10.389740  243993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:13:10.395302  243993 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0817 22:13:10.395335  243993 start.go:466] detecting cgroup driver to use...
	I0817 22:13:10.395413  243993 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:13:10.405644  243993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:13:10.414533  243993 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:13:10.414606  243993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:13:10.422506  243993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:13:10.430732  243993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0817 22:13:10.439769  243993 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0817 22:13:10.439842  243993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:13:10.534656  243993 docker.go:212] disabling docker service ...
	I0817 22:13:10.534737  243993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:13:10.549142  243993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:13:10.557840  243993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:13:10.645093  243993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:13:10.740595  243993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:13:10.750601  243993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:13:10.763863  243993 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:13:10.763929  243993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:13:10.772981  243993 out.go:177] 
	W0817 22:13:10.774893  243993 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0817 22:13:10.774917  243993 out.go:239] * 
	W0817 22:13:10.775796  243993 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 22:13:10.777898  243993 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-717933 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (335.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-294781 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-294781 --alsologtostderr -v=3: exit status 82 (2m1.591013655s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-294781"  ...
	* Stopping node "old-k8s-version-294781"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:16:44.313035  253541 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:16:44.313169  253541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:16:44.313181  253541 out.go:309] Setting ErrFile to fd 2...
	I0817 22:16:44.313188  253541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:16:44.313428  253541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:16:44.313716  253541 out.go:303] Setting JSON to false
	I0817 22:16:44.313803  253541 mustload.go:65] Loading cluster: old-k8s-version-294781
	I0817 22:16:44.314174  253541 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:16:44.314282  253541 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/config.json ...
	I0817 22:16:44.314459  253541 mustload.go:65] Loading cluster: old-k8s-version-294781
	I0817 22:16:44.314602  253541 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:16:44.314650  253541 stop.go:39] StopHost: old-k8s-version-294781
	I0817 22:16:44.314995  253541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:16:44.315060  253541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:16:44.332424  253541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I0817 22:16:44.333029  253541 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:16:44.333847  253541 main.go:141] libmachine: Using API Version  1
	I0817 22:16:44.333880  253541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:16:44.334270  253541 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:16:44.338348  253541 out.go:177] * Stopping node "old-k8s-version-294781"  ...
	I0817 22:16:44.339964  253541 main.go:141] libmachine: Stopping "old-k8s-version-294781"...
	I0817 22:16:44.340006  253541 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:16:44.341839  253541 main.go:141] libmachine: (old-k8s-version-294781) Calling .Stop
	I0817 22:16:44.345586  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 0/60
	I0817 22:16:45.347941  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 1/60
	I0817 22:16:46.349380  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 2/60
	I0817 22:16:47.351002  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 3/60
	I0817 22:16:48.352829  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 4/60
	I0817 22:16:49.354934  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 5/60
	I0817 22:16:50.356837  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 6/60
	I0817 22:16:51.359318  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 7/60
	I0817 22:16:52.360983  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 8/60
	I0817 22:16:53.362508  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 9/60
	I0817 22:16:54.364826  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 10/60
	I0817 22:16:55.366873  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 11/60
	I0817 22:16:56.369424  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 12/60
	I0817 22:16:57.370818  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 13/60
	I0817 22:16:58.372380  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 14/60
	I0817 22:16:59.374597  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 15/60
	I0817 22:17:00.376713  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 16/60
	I0817 22:17:01.378221  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 17/60
	I0817 22:17:02.379546  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 18/60
	I0817 22:17:03.381088  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 19/60
	I0817 22:17:04.382653  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 20/60
	I0817 22:17:05.384587  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 21/60
	I0817 22:17:06.386199  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 22/60
	I0817 22:17:07.387554  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 23/60
	I0817 22:17:08.389059  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 24/60
	I0817 22:17:09.391231  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 25/60
	I0817 22:17:10.392741  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 26/60
	I0817 22:17:11.394071  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 27/60
	I0817 22:17:12.395513  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 28/60
	I0817 22:17:13.396781  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 29/60
	I0817 22:17:14.399093  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 30/60
	I0817 22:17:15.400699  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 31/60
	I0817 22:17:16.403019  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 32/60
	I0817 22:17:17.404632  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 33/60
	I0817 22:17:18.405766  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 34/60
	I0817 22:17:19.407252  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 35/60
	I0817 22:17:20.408547  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 36/60
	I0817 22:17:21.409762  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 37/60
	I0817 22:17:22.411004  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 38/60
	I0817 22:17:23.412283  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 39/60
	I0817 22:17:24.414704  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 40/60
	I0817 22:17:25.415958  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 41/60
	I0817 22:17:26.417594  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 42/60
	I0817 22:17:27.419031  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 43/60
	I0817 22:17:28.420728  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 44/60
	I0817 22:17:29.422754  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 45/60
	I0817 22:17:30.424660  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 46/60
	I0817 22:17:31.426043  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 47/60
	I0817 22:17:32.427253  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 48/60
	I0817 22:17:33.428499  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 49/60
	I0817 22:17:34.429890  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 50/60
	I0817 22:17:35.432287  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 51/60
	I0817 22:17:36.434081  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 52/60
	I0817 22:17:37.435534  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 53/60
	I0817 22:17:38.437544  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 54/60
	I0817 22:17:39.439571  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 55/60
	I0817 22:17:40.440945  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 56/60
	I0817 22:17:41.442685  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 57/60
	I0817 22:17:42.444238  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 58/60
	I0817 22:17:43.446646  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 59/60
	I0817 22:17:44.447775  253541 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:17:44.447851  253541 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:17:44.447871  253541 retry.go:31] will retry after 1.278745207s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:17:45.727303  253541 stop.go:39] StopHost: old-k8s-version-294781
	I0817 22:17:45.727756  253541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:17:45.727881  253541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:17:45.742958  253541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I0817 22:17:45.743439  253541 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:17:45.743955  253541 main.go:141] libmachine: Using API Version  1
	I0817 22:17:45.743983  253541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:17:45.744352  253541 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:17:45.746461  253541 out.go:177] * Stopping node "old-k8s-version-294781"  ...
	I0817 22:17:45.748085  253541 main.go:141] libmachine: Stopping "old-k8s-version-294781"...
	I0817 22:17:45.748105  253541 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:17:45.749846  253541 main.go:141] libmachine: (old-k8s-version-294781) Calling .Stop
	I0817 22:17:45.753097  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 0/60
	I0817 22:17:46.754624  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 1/60
	I0817 22:17:47.756176  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 2/60
	I0817 22:17:48.757770  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 3/60
	I0817 22:17:49.759364  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 4/60
	I0817 22:17:50.761390  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 5/60
	I0817 22:17:51.762998  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 6/60
	I0817 22:17:52.764590  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 7/60
	I0817 22:17:53.765981  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 8/60
	I0817 22:17:54.767400  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 9/60
	I0817 22:17:55.769600  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 10/60
	I0817 22:17:56.771056  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 11/60
	I0817 22:17:57.772694  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 12/60
	I0817 22:17:58.774158  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 13/60
	I0817 22:17:59.775822  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 14/60
	I0817 22:18:00.777466  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 15/60
	I0817 22:18:01.778887  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 16/60
	I0817 22:18:02.780375  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 17/60
	I0817 22:18:03.781749  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 18/60
	I0817 22:18:04.783314  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 19/60
	I0817 22:18:05.784942  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 20/60
	I0817 22:18:06.786525  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 21/60
	I0817 22:18:07.788509  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 22/60
	I0817 22:18:08.790048  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 23/60
	I0817 22:18:09.791609  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 24/60
	I0817 22:18:10.793283  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 25/60
	I0817 22:18:11.794677  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 26/60
	I0817 22:18:12.796256  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 27/60
	I0817 22:18:13.797834  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 28/60
	I0817 22:18:14.799325  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 29/60
	I0817 22:18:15.801051  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 30/60
	I0817 22:18:16.802590  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 31/60
	I0817 22:18:17.803985  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 32/60
	I0817 22:18:18.805609  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 33/60
	I0817 22:18:19.807017  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 34/60
	I0817 22:18:20.808828  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 35/60
	I0817 22:18:21.810358  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 36/60
	I0817 22:18:22.811763  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 37/60
	I0817 22:18:23.813271  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 38/60
	I0817 22:18:24.814679  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 39/60
	I0817 22:18:25.816122  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 40/60
	I0817 22:18:26.817847  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 41/60
	I0817 22:18:27.819286  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 42/60
	I0817 22:18:28.820685  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 43/60
	I0817 22:18:29.822151  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 44/60
	I0817 22:18:30.823853  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 45/60
	I0817 22:18:31.825257  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 46/60
	I0817 22:18:32.826685  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 47/60
	I0817 22:18:33.828341  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 48/60
	I0817 22:18:34.829662  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 49/60
	I0817 22:18:35.831736  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 50/60
	I0817 22:18:36.833387  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 51/60
	I0817 22:18:37.835058  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 52/60
	I0817 22:18:38.836790  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 53/60
	I0817 22:18:39.838325  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 54/60
	I0817 22:18:40.840282  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 55/60
	I0817 22:18:41.842015  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 56/60
	I0817 22:18:42.843526  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 57/60
	I0817 22:18:43.844848  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 58/60
	I0817 22:18:44.846426  253541 main.go:141] libmachine: (old-k8s-version-294781) Waiting for machine to stop 59/60
	I0817 22:18:45.846912  253541 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:18:45.846963  253541 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:18:45.849376  253541 out.go:177] 
	W0817 22:18:45.851069  253541 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0817 22:18:45.851087  253541 out.go:239] * 
	W0817 22:18:45.855428  253541 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 22:18:45.857192  253541 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-294781 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781
E0817 22:18:48.088908  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781: exit status 3 (18.523274867s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:04.382455  254673 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host
	E0817 22:19:04.382482  254673 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-294781" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-525875 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-525875 --alsologtostderr -v=3: exit status 82 (2m1.373664389s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-525875"  ...
	* Stopping node "no-preload-525875"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:16:50.746345  253618 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:16:50.746475  253618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:16:50.746484  253618 out.go:309] Setting ErrFile to fd 2...
	I0817 22:16:50.746489  253618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:16:50.746681  253618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:16:50.746928  253618 out.go:303] Setting JSON to false
	I0817 22:16:50.747009  253618 mustload.go:65] Loading cluster: no-preload-525875
	I0817 22:16:50.747330  253618 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:16:50.747412  253618 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/config.json ...
	I0817 22:16:50.747581  253618 mustload.go:65] Loading cluster: no-preload-525875
	I0817 22:16:50.747690  253618 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:16:50.747727  253618 stop.go:39] StopHost: no-preload-525875
	I0817 22:16:50.748077  253618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:16:50.748153  253618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:16:50.762822  253618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0817 22:16:50.763342  253618 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:16:50.764120  253618 main.go:141] libmachine: Using API Version  1
	I0817 22:16:50.764159  253618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:16:50.764589  253618 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:16:50.767243  253618 out.go:177] * Stopping node "no-preload-525875"  ...
	I0817 22:16:50.768718  253618 main.go:141] libmachine: Stopping "no-preload-525875"...
	I0817 22:16:50.768737  253618 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:16:50.770788  253618 main.go:141] libmachine: (no-preload-525875) Calling .Stop
	I0817 22:16:50.774296  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 0/60
	I0817 22:16:51.777448  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 1/60
	I0817 22:16:52.778956  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 2/60
	I0817 22:16:53.780976  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 3/60
	I0817 22:16:54.782289  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 4/60
	I0817 22:16:55.784440  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 5/60
	I0817 22:16:56.785821  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 6/60
	I0817 22:16:57.787153  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 7/60
	I0817 22:16:58.789361  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 8/60
	I0817 22:16:59.791167  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 9/60
	I0817 22:17:00.792790  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 10/60
	I0817 22:17:01.794171  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 11/60
	I0817 22:17:02.795649  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 12/60
	I0817 22:17:03.797794  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 13/60
	I0817 22:17:04.799508  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 14/60
	I0817 22:17:05.801740  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 15/60
	I0817 22:17:06.803312  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 16/60
	I0817 22:17:07.804526  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 17/60
	I0817 22:17:08.806098  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 18/60
	I0817 22:17:09.807497  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 19/60
	I0817 22:17:10.810025  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 20/60
	I0817 22:17:11.812320  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 21/60
	I0817 22:17:12.813799  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 22/60
	I0817 22:17:13.815182  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 23/60
	I0817 22:17:14.816516  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 24/60
	I0817 22:17:15.818694  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 25/60
	I0817 22:17:16.820545  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 26/60
	I0817 22:17:17.821737  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 27/60
	I0817 22:17:18.823155  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 28/60
	I0817 22:17:19.824344  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 29/60
	I0817 22:17:20.825655  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 30/60
	I0817 22:17:21.827080  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 31/60
	I0817 22:17:22.828876  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 32/60
	I0817 22:17:23.830457  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 33/60
	I0817 22:17:24.831851  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 34/60
	I0817 22:17:25.833493  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 35/60
	I0817 22:17:26.834885  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 36/60
	I0817 22:17:27.836752  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 37/60
	I0817 22:17:28.838240  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 38/60
	I0817 22:17:29.840412  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 39/60
	I0817 22:17:30.841941  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 40/60
	I0817 22:17:31.843083  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 41/60
	I0817 22:17:32.844570  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 42/60
	I0817 22:17:33.845925  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 43/60
	I0817 22:17:34.847462  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 44/60
	I0817 22:17:35.849640  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 45/60
	I0817 22:17:36.851011  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 46/60
	I0817 22:17:37.852368  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 47/60
	I0817 22:17:38.853714  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 48/60
	I0817 22:17:39.855303  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 49/60
	I0817 22:17:40.857431  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 50/60
	I0817 22:17:41.858804  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 51/60
	I0817 22:17:42.860524  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 52/60
	I0817 22:17:43.862328  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 53/60
	I0817 22:17:44.864974  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 54/60
	I0817 22:17:45.867365  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 55/60
	I0817 22:17:46.868797  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 56/60
	I0817 22:17:47.870174  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 57/60
	I0817 22:17:48.871809  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 58/60
	I0817 22:17:49.873107  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 59/60
	I0817 22:17:50.873949  253618 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:17:50.874017  253618 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:17:50.874041  253618 retry.go:31] will retry after 1.068229647s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:17:51.943310  253618 stop.go:39] StopHost: no-preload-525875
	I0817 22:17:51.943702  253618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:17:51.943751  253618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:17:51.959245  253618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0817 22:17:51.959787  253618 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:17:51.960276  253618 main.go:141] libmachine: Using API Version  1
	I0817 22:17:51.960298  253618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:17:51.960611  253618 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:17:51.962967  253618 out.go:177] * Stopping node "no-preload-525875"  ...
	I0817 22:17:51.964457  253618 main.go:141] libmachine: Stopping "no-preload-525875"...
	I0817 22:17:51.964471  253618 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:17:51.966040  253618 main.go:141] libmachine: (no-preload-525875) Calling .Stop
	I0817 22:17:51.969170  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 0/60
	I0817 22:17:52.970808  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 1/60
	I0817 22:17:53.972536  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 2/60
	I0817 22:17:54.974168  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 3/60
	I0817 22:17:55.975797  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 4/60
	I0817 22:17:56.977956  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 5/60
	I0817 22:17:57.979621  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 6/60
	I0817 22:17:58.981072  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 7/60
	I0817 22:17:59.982652  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 8/60
	I0817 22:18:00.984505  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 9/60
	I0817 22:18:01.986582  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 10/60
	I0817 22:18:02.987971  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 11/60
	I0817 22:18:03.989320  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 12/60
	I0817 22:18:04.990846  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 13/60
	I0817 22:18:05.992838  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 14/60
	I0817 22:18:06.994784  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 15/60
	I0817 22:18:07.996418  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 16/60
	I0817 22:18:08.997602  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 17/60
	I0817 22:18:09.999039  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 18/60
	I0817 22:18:11.000528  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 19/60
	I0817 22:18:12.002578  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 20/60
	I0817 22:18:13.004577  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 21/60
	I0817 22:18:14.006140  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 22/60
	I0817 22:18:15.007710  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 23/60
	I0817 22:18:16.009271  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 24/60
	I0817 22:18:17.011262  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 25/60
	I0817 22:18:18.012651  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 26/60
	I0817 22:18:19.014252  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 27/60
	I0817 22:18:20.015606  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 28/60
	I0817 22:18:21.017191  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 29/60
	I0817 22:18:22.018717  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 30/60
	I0817 22:18:23.020186  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 31/60
	I0817 22:18:24.021561  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 32/60
	I0817 22:18:25.023212  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 33/60
	I0817 22:18:26.024715  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 34/60
	I0817 22:18:27.026526  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 35/60
	I0817 22:18:28.028043  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 36/60
	I0817 22:18:29.029411  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 37/60
	I0817 22:18:30.031096  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 38/60
	I0817 22:18:31.032515  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 39/60
	I0817 22:18:32.034550  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 40/60
	I0817 22:18:33.036045  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 41/60
	I0817 22:18:34.037538  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 42/60
	I0817 22:18:35.039156  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 43/60
	I0817 22:18:36.040497  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 44/60
	I0817 22:18:37.042569  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 45/60
	I0817 22:18:38.044027  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 46/60
	I0817 22:18:39.045542  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 47/60
	I0817 22:18:40.047233  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 48/60
	I0817 22:18:41.048754  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 49/60
	I0817 22:18:42.050993  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 50/60
	I0817 22:18:43.052278  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 51/60
	I0817 22:18:44.053581  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 52/60
	I0817 22:18:45.055081  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 53/60
	I0817 22:18:46.056902  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 54/60
	I0817 22:18:47.058992  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 55/60
	I0817 22:18:48.060407  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 56/60
	I0817 22:18:49.061910  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 57/60
	I0817 22:18:50.063371  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 58/60
	I0817 22:18:51.064880  253618 main.go:141] libmachine: (no-preload-525875) Waiting for machine to stop 59/60
	I0817 22:18:52.065878  253618 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:18:52.065925  253618 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:18:52.067961  253618 out.go:177] 
	W0817 22:18:52.069446  253618 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0817 22:18:52.069458  253618 out.go:239] * 
	* 
	W0817 22:18:52.075324  253618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 22:18:52.076908  253618 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-525875 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875
E0817 22:18:53.210098  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:19:03.451193  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875: exit status 3 (18.448138131s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:10.526471  254732 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	E0817 22:19:10.526495  254732 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-525875" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.82s)
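Note on the failure pattern above: the log shows libmachine issuing a Stop request and then polling the VM state once per second for 60 iterations ("Waiting for machine to stop N/60"), after which minikube retries the whole StopHost once after a ~1s backoff and finally exits with GUEST_STOP_TIMEOUT (exit status 82) because the KVM guest never leaves the "Running" state. Below is a minimal Go sketch of that wait-and-retry shape; it only mirrors the observable log behavior, and the helper names pollStop and vmStillRunning are hypothetical stand-ins, not the actual minikube/libmachine code.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmStillRunning stands in for a libvirt/driver state query (hypothetical).
	func vmStillRunning(name string) bool { return true } // in this run the guest never left "Running"

	// pollStop mirrors the observable loop: one stop request, then up to 60
	// one-second state checks before giving up.
	func pollStop(name string) error {
		// a real driver would send the ACPI stop request here
		for i := 0; i < 60; i++ {
			if !vmStillRunning(name) {
				return nil
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/60\n", name, i)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		name := "no-preload-525875"
		if err := pollStop(name); err != nil {
			time.Sleep(time.Second) // the log shows one retry after roughly 1s
			if err = pollStop(name); err != nil {
				fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			}
		}
	}

In this run both attempts exhausted the full 60 checks, which is why the stop command ran for just over two minutes before failing.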

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-437183 --alsologtostderr -v=3
E0817 22:17:12.229028  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:17:14.045802  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.051088  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.061398  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.081726  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.122128  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.202512  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.363006  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:14.683629  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:15.323892  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:16.604558  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:17.607431  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:17:19.165408  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:24.286430  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-437183 --alsologtostderr -v=3: exit status 82 (2m1.370815915s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-437183"  ...
	* Stopping node "embed-certs-437183"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:17:10.308977  253828 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:17:10.309126  253828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:17:10.309140  253828 out.go:309] Setting ErrFile to fd 2...
	I0817 22:17:10.309146  253828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:17:10.309355  253828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:17:10.309641  253828 out.go:303] Setting JSON to false
	I0817 22:17:10.309731  253828 mustload.go:65] Loading cluster: embed-certs-437183
	I0817 22:17:10.310108  253828 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:17:10.310201  253828 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/config.json ...
	I0817 22:17:10.310373  253828 mustload.go:65] Loading cluster: embed-certs-437183
	I0817 22:17:10.310480  253828 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:17:10.310509  253828 stop.go:39] StopHost: embed-certs-437183
	I0817 22:17:10.310910  253828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:17:10.310966  253828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:17:10.326190  253828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0817 22:17:10.326726  253828 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:17:10.327339  253828 main.go:141] libmachine: Using API Version  1
	I0817 22:17:10.327362  253828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:17:10.327728  253828 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:17:10.330917  253828 out.go:177] * Stopping node "embed-certs-437183"  ...
	I0817 22:17:10.332890  253828 main.go:141] libmachine: Stopping "embed-certs-437183"...
	I0817 22:17:10.332915  253828 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:17:10.334788  253828 main.go:141] libmachine: (embed-certs-437183) Calling .Stop
	I0817 22:17:10.338171  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 0/60
	I0817 22:17:11.340734  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 1/60
	I0817 22:17:12.342394  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 2/60
	I0817 22:17:13.344140  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 3/60
	I0817 22:17:14.345631  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 4/60
	I0817 22:17:15.347109  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 5/60
	I0817 22:17:16.348656  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 6/60
	I0817 22:17:17.349964  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 7/60
	I0817 22:17:18.351498  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 8/60
	I0817 22:17:19.352835  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 9/60
	I0817 22:17:20.354081  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 10/60
	I0817 22:17:21.356159  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 11/60
	I0817 22:17:22.357488  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 12/60
	I0817 22:17:23.359601  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 13/60
	I0817 22:17:24.361153  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 14/60
	I0817 22:17:25.363512  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 15/60
	I0817 22:17:26.364917  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 16/60
	I0817 22:17:27.366541  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 17/60
	I0817 22:17:28.368626  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 18/60
	I0817 22:17:29.370553  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 19/60
	I0817 22:17:30.372901  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 20/60
	I0817 22:17:31.374363  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 21/60
	I0817 22:17:32.375784  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 22/60
	I0817 22:17:33.377230  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 23/60
	I0817 22:17:34.378529  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 24/60
	I0817 22:17:35.380424  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 25/60
	I0817 22:17:36.381913  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 26/60
	I0817 22:17:37.383263  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 27/60
	I0817 22:17:38.384945  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 28/60
	I0817 22:17:39.386217  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 29/60
	I0817 22:17:40.388717  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 30/60
	I0817 22:17:41.390409  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 31/60
	I0817 22:17:42.391844  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 32/60
	I0817 22:17:43.393525  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 33/60
	I0817 22:17:44.395683  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 34/60
	I0817 22:17:45.397810  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 35/60
	I0817 22:17:46.399374  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 36/60
	I0817 22:17:47.401063  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 37/60
	I0817 22:17:48.402567  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 38/60
	I0817 22:17:49.403968  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 39/60
	I0817 22:17:50.406166  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 40/60
	I0817 22:17:51.407695  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 41/60
	I0817 22:17:52.409054  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 42/60
	I0817 22:17:53.410428  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 43/60
	I0817 22:17:54.411689  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 44/60
	I0817 22:17:55.413925  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 45/60
	I0817 22:17:56.415899  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 46/60
	I0817 22:17:57.417389  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 47/60
	I0817 22:17:58.419060  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 48/60
	I0817 22:17:59.420590  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 49/60
	I0817 22:18:00.423253  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 50/60
	I0817 22:18:01.424635  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 51/60
	I0817 22:18:02.426250  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 52/60
	I0817 22:18:03.427646  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 53/60
	I0817 22:18:04.429178  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 54/60
	I0817 22:18:05.431179  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 55/60
	I0817 22:18:06.432489  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 56/60
	I0817 22:18:07.434085  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 57/60
	I0817 22:18:08.435248  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 58/60
	I0817 22:18:09.436544  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 59/60
	I0817 22:18:10.437285  253828 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:18:10.437360  253828 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:18:10.437382  253828 retry.go:31] will retry after 1.060265538s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:18:11.498572  253828 stop.go:39] StopHost: embed-certs-437183
	I0817 22:18:11.498981  253828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:18:11.499033  253828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:18:11.514359  253828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40747
	I0817 22:18:11.514886  253828 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:18:11.515391  253828 main.go:141] libmachine: Using API Version  1
	I0817 22:18:11.515411  253828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:18:11.515781  253828 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:18:11.518110  253828 out.go:177] * Stopping node "embed-certs-437183"  ...
	I0817 22:18:11.519760  253828 main.go:141] libmachine: Stopping "embed-certs-437183"...
	I0817 22:18:11.519779  253828 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:18:11.521502  253828 main.go:141] libmachine: (embed-certs-437183) Calling .Stop
	I0817 22:18:11.524969  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 0/60
	I0817 22:18:12.526533  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 1/60
	I0817 22:18:13.528022  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 2/60
	I0817 22:18:14.529565  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 3/60
	I0817 22:18:15.531132  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 4/60
	I0817 22:18:16.533294  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 5/60
	I0817 22:18:17.534937  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 6/60
	I0817 22:18:18.536652  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 7/60
	I0817 22:18:19.538256  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 8/60
	I0817 22:18:20.539618  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 9/60
	I0817 22:18:21.541848  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 10/60
	I0817 22:18:22.543439  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 11/60
	I0817 22:18:23.544977  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 12/60
	I0817 22:18:24.546393  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 13/60
	I0817 22:18:25.548061  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 14/60
	I0817 22:18:26.550313  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 15/60
	I0817 22:18:27.552088  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 16/60
	I0817 22:18:28.553640  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 17/60
	I0817 22:18:29.555269  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 18/60
	I0817 22:18:30.557084  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 19/60
	I0817 22:18:31.559319  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 20/60
	I0817 22:18:32.560837  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 21/60
	I0817 22:18:33.562379  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 22/60
	I0817 22:18:34.563912  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 23/60
	I0817 22:18:35.565617  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 24/60
	I0817 22:18:36.567396  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 25/60
	I0817 22:18:37.568937  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 26/60
	I0817 22:18:38.570725  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 27/60
	I0817 22:18:39.573152  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 28/60
	I0817 22:18:40.574616  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 29/60
	I0817 22:18:41.576522  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 30/60
	I0817 22:18:42.578071  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 31/60
	I0817 22:18:43.579492  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 32/60
	I0817 22:18:44.581259  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 33/60
	I0817 22:18:45.582856  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 34/60
	I0817 22:18:46.584708  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 35/60
	I0817 22:18:47.586264  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 36/60
	I0817 22:18:48.587743  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 37/60
	I0817 22:18:49.589282  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 38/60
	I0817 22:18:50.590726  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 39/60
	I0817 22:18:51.593056  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 40/60
	I0817 22:18:52.594677  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 41/60
	I0817 22:18:53.596260  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 42/60
	I0817 22:18:54.597719  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 43/60
	I0817 22:18:55.599981  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 44/60
	I0817 22:18:56.601931  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 45/60
	I0817 22:18:57.603369  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 46/60
	I0817 22:18:58.604798  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 47/60
	I0817 22:18:59.606326  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 48/60
	I0817 22:19:00.607889  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 49/60
	I0817 22:19:01.610150  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 50/60
	I0817 22:19:02.611635  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 51/60
	I0817 22:19:03.613163  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 52/60
	I0817 22:19:04.614579  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 53/60
	I0817 22:19:05.616009  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 54/60
	I0817 22:19:06.617844  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 55/60
	I0817 22:19:07.618917  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 56/60
	I0817 22:19:08.620704  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 57/60
	I0817 22:19:09.622041  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 58/60
	I0817 22:19:10.624271  253828 main.go:141] libmachine: (embed-certs-437183) Waiting for machine to stop 59/60
	I0817 22:19:11.625323  253828 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:19:11.625375  253828 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:19:11.627449  253828 out.go:177] 
	W0817 22:19:11.629181  253828 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0817 22:19:11.629209  253828 out.go:239] * 
	* 
	W0817 22:19:11.633483  253828 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 22:19:11.635175  253828 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-437183 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183: exit status 3 (18.601430591s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:30.238424  254885 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0817 22:19:30.238446  254885 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-437183" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.97s)
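Note on the post-mortem above: after the failed stop, the harness runs out/minikube-linux-amd64 status --format={{.Host}} -p <profile> -n <profile>, treats the non-zero exit as possibly ok, and skips log retrieval because the reported host state is "Error" rather than "Running" (SSH to the guest fails with "no route to host"). A rough Go sketch of that check follows; checkHost and its exit-code handling are illustrative assumptions that mirror the log, not the helpers_test.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// checkHost mirrors the post-mortem call seen in the log:
	//   out/minikube-linux-amd64 status --format={{.Host}} -p <profile> -n <profile>
	func checkHost(profile string) (state string, exitCode int) {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout only; the E-prefixed lines go to stderr
		if exitErr, ok := err.(*exec.ExitError); ok {
			exitCode = exitErr.ExitCode()
		}
		return strings.TrimSpace(string(out)), exitCode
	}

	func main() {
		state, code := checkHost("embed-certs-437183")
		if code != 0 {
			fmt.Printf("status error: exit status %d (may be ok)\n", code)
		}
		if state != "Running" {
			// same decision as the helper: host not running, skip log retrieval
			fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n",
				"embed-certs-437183", state)
		}
	}

For reporting, the error box in the captured stderr names the two artifacts to attach: logs.txt produced by `minikube logs --file=logs.txt` and the /tmp/minikube_stop_*.log file.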

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-321287 --alsologtostderr -v=3
E0817 22:17:55.007698  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:17:56.110280  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.115544  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.125824  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.146138  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.187114  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.267529  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.428208  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:56.748840  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:57.389823  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:17:58.670626  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:18:01.231147  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:18:06.351450  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:18:09.344485  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 22:18:16.591623  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:18:34.150206  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:18:35.968195  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:18:37.071865  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:18:39.528423  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:18:42.967147  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:42.972486  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:42.982850  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:43.003220  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:43.043643  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:43.124685  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:43.285822  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:43.606767  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:44.247437  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:18:45.527636  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-321287 --alsologtostderr -v=3: exit status 82 (2m1.553832619s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-321287"  ...
	* Stopping node "default-k8s-diff-port-321287"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:17:44.114658  254008 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:17:44.114812  254008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:17:44.114835  254008 out.go:309] Setting ErrFile to fd 2...
	I0817 22:17:44.114839  254008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:17:44.115087  254008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:17:44.115364  254008 out.go:303] Setting JSON to false
	I0817 22:17:44.115456  254008 mustload.go:65] Loading cluster: default-k8s-diff-port-321287
	I0817 22:17:44.115826  254008 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:17:44.115920  254008 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:17:44.116111  254008 mustload.go:65] Loading cluster: default-k8s-diff-port-321287
	I0817 22:17:44.116246  254008 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:17:44.116284  254008 stop.go:39] StopHost: default-k8s-diff-port-321287
	I0817 22:17:44.116665  254008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:17:44.116748  254008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:17:44.131601  254008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0817 22:17:44.132145  254008 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:17:44.132818  254008 main.go:141] libmachine: Using API Version  1
	I0817 22:17:44.132845  254008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:17:44.133203  254008 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:17:44.136041  254008 out.go:177] * Stopping node "default-k8s-diff-port-321287"  ...
	I0817 22:17:44.137615  254008 main.go:141] libmachine: Stopping "default-k8s-diff-port-321287"...
	I0817 22:17:44.137642  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:17:44.139372  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Stop
	I0817 22:17:44.142723  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 0/60
	I0817 22:17:45.144149  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 1/60
	I0817 22:17:46.145475  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 2/60
	I0817 22:17:47.146930  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 3/60
	I0817 22:17:48.148464  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 4/60
	I0817 22:17:49.150785  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 5/60
	I0817 22:17:50.152253  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 6/60
	I0817 22:17:51.153680  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 7/60
	I0817 22:17:52.155126  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 8/60
	I0817 22:17:53.156470  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 9/60
	I0817 22:17:54.157964  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 10/60
	I0817 22:17:55.159434  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 11/60
	I0817 22:17:56.160761  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 12/60
	I0817 22:17:57.162288  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 13/60
	I0817 22:17:58.163716  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 14/60
	I0817 22:17:59.165826  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 15/60
	I0817 22:18:00.167164  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 16/60
	I0817 22:18:01.168799  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 17/60
	I0817 22:18:02.170417  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 18/60
	I0817 22:18:03.172670  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 19/60
	I0817 22:18:04.174882  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 20/60
	I0817 22:18:05.176734  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 21/60
	I0817 22:18:06.178038  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 22/60
	I0817 22:18:07.179638  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 23/60
	I0817 22:18:08.180904  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 24/60
	I0817 22:18:09.182948  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 25/60
	I0817 22:18:10.184359  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 26/60
	I0817 22:18:11.186037  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 27/60
	I0817 22:18:12.187788  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 28/60
	I0817 22:18:13.189061  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 29/60
	I0817 22:18:14.191315  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 30/60
	I0817 22:18:15.192891  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 31/60
	I0817 22:18:16.194256  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 32/60
	I0817 22:18:17.195852  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 33/60
	I0817 22:18:18.197333  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 34/60
	I0817 22:18:19.199741  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 35/60
	I0817 22:18:20.201233  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 36/60
	I0817 22:18:21.202851  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 37/60
	I0817 22:18:22.204573  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 38/60
	I0817 22:18:23.206043  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 39/60
	I0817 22:18:24.207538  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 40/60
	I0817 22:18:25.209208  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 41/60
	I0817 22:18:26.210618  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 42/60
	I0817 22:18:27.212731  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 43/60
	I0817 22:18:28.214298  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 44/60
	I0817 22:18:29.216515  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 45/60
	I0817 22:18:30.218281  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 46/60
	I0817 22:18:31.219842  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 47/60
	I0817 22:18:32.221398  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 48/60
	I0817 22:18:33.222926  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 49/60
	I0817 22:18:34.225219  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 50/60
	I0817 22:18:35.226759  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 51/60
	I0817 22:18:36.228685  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 52/60
	I0817 22:18:37.230078  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 53/60
	I0817 22:18:38.231495  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 54/60
	I0817 22:18:39.233595  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 55/60
	I0817 22:18:40.235054  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 56/60
	I0817 22:18:41.236548  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 57/60
	I0817 22:18:42.238179  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 58/60
	I0817 22:18:43.239464  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 59/60
	I0817 22:18:44.240047  254008 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:18:44.240123  254008 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:18:44.240154  254008 retry.go:31] will retry after 1.249167717s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:18:45.490537  254008 stop.go:39] StopHost: default-k8s-diff-port-321287
	I0817 22:18:45.491014  254008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:18:45.491064  254008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:18:45.505933  254008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0817 22:18:45.506410  254008 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:18:45.506902  254008 main.go:141] libmachine: Using API Version  1
	I0817 22:18:45.506927  254008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:18:45.507276  254008 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:18:45.509493  254008 out.go:177] * Stopping node "default-k8s-diff-port-321287"  ...
	I0817 22:18:45.510968  254008 main.go:141] libmachine: Stopping "default-k8s-diff-port-321287"...
	I0817 22:18:45.510988  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:18:45.512592  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Stop
	I0817 22:18:45.516393  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 0/60
	I0817 22:18:46.517889  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 1/60
	I0817 22:18:47.519408  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 2/60
	I0817 22:18:48.521310  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 3/60
	I0817 22:18:49.522757  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 4/60
	I0817 22:18:50.524738  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 5/60
	I0817 22:18:51.526033  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 6/60
	I0817 22:18:52.527578  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 7/60
	I0817 22:18:53.529018  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 8/60
	I0817 22:18:54.530540  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 9/60
	I0817 22:18:55.532854  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 10/60
	I0817 22:18:56.534516  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 11/60
	I0817 22:18:57.536088  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 12/60
	I0817 22:18:58.537587  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 13/60
	I0817 22:18:59.539171  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 14/60
	I0817 22:19:00.541366  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 15/60
	I0817 22:19:01.542836  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 16/60
	I0817 22:19:02.544348  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 17/60
	I0817 22:19:03.545832  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 18/60
	I0817 22:19:04.547276  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 19/60
	I0817 22:19:05.549530  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 20/60
	I0817 22:19:06.551144  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 21/60
	I0817 22:19:07.552482  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 22/60
	I0817 22:19:08.554032  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 23/60
	I0817 22:19:09.555398  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 24/60
	I0817 22:19:10.557321  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 25/60
	I0817 22:19:11.558799  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 26/60
	I0817 22:19:12.560082  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 27/60
	I0817 22:19:13.561435  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 28/60
	I0817 22:19:14.562835  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 29/60
	I0817 22:19:15.564882  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 30/60
	I0817 22:19:16.566697  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 31/60
	I0817 22:19:17.568172  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 32/60
	I0817 22:19:18.569694  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 33/60
	I0817 22:19:19.571109  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 34/60
	I0817 22:19:20.573286  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 35/60
	I0817 22:19:21.574767  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 36/60
	I0817 22:19:22.576339  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 37/60
	I0817 22:19:23.577727  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 38/60
	I0817 22:19:24.579248  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 39/60
	I0817 22:19:25.581372  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 40/60
	I0817 22:19:26.582880  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 41/60
	I0817 22:19:27.584240  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 42/60
	I0817 22:19:28.585788  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 43/60
	I0817 22:19:29.587381  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 44/60
	I0817 22:19:30.589548  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 45/60
	I0817 22:19:31.591087  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 46/60
	I0817 22:19:32.592544  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 47/60
	I0817 22:19:33.594360  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 48/60
	I0817 22:19:34.595836  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 49/60
	I0817 22:19:35.597988  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 50/60
	I0817 22:19:36.599320  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 51/60
	I0817 22:19:37.600828  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 52/60
	I0817 22:19:38.602418  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 53/60
	I0817 22:19:39.604572  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 54/60
	I0817 22:19:40.606744  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 55/60
	I0817 22:19:41.608299  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 56/60
	I0817 22:19:42.609736  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 57/60
	I0817 22:19:43.611315  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 58/60
	I0817 22:19:44.613070  254008 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for machine to stop 59/60
	I0817 22:19:45.614203  254008 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0817 22:19:45.614263  254008 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0817 22:19:45.616830  254008 out.go:177] 
	W0817 22:19:45.618655  254008 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0817 22:19:45.618672  254008 out.go:239] * 
	* 
	W0817 22:19:45.622968  254008 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 22:19:45.624712  254008 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-321287 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
E0817 22:19:55.766170  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:57.889281  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287: exit status 3 (18.658624214s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:20:04.286405  255249 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	E0817 22:20:04.286431  255249 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321287" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781: exit status 3 (3.168094816s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:07.550469  254785 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host
	E0817 22:19:07.550492  254785 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-294781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-294781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157047278s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-294781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781: exit status 3 (3.059233102s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:16.766547  254925 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host
	E0817 22:19:16.766577  254925 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.56:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-294781" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875: exit status 3 (3.168187311s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:13.694508  254844 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	E0817 22:19:13.694535  254844 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-525875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-525875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155400754s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-525875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875: exit status 3 (3.059760701s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:22.910507  255016 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	E0817 22:19:22.910525  255016 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-525875" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183: exit status 3 (3.168227366s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:33.406468  255102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0817 22:19:33.406492  255102 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-437183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0817 22:19:35.283372  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.288688  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.299023  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.319455  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.359864  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.440287  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.601276  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:35.922140  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:36.563294  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:19:37.843582  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-437183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157411576s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-437183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183
E0817 22:19:40.404324  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183: exit status 3 (3.058379111s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:19:42.622478  255174 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0817 22:19:42.622496  255174 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-437183" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
E0817 22:20:04.893106  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287: exit status 3 (3.167869925s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:20:07.454480  255364 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	E0817 22:20:07.454512  255364 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-321287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-321287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156971013s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-321287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
E0817 22:20:14.713833  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 22:20:16.246539  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287: exit status 3 (3.058879498s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 22:20:16.670502  255451 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	E0817 22:20:16.670530  255451 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321287" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-525875 -n no-preload-525875
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:38:41.090286646 +0000 UTC m=+5294.731054378
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-525875 logs -n 25
E0817 22:38:42.966242  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-525875 logs -n 25: (1.81205559s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-975779 sudo cat                              | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo find                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo crio                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-975779                                       | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:20:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:20:16.712686  255491 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:20:16.712825  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.712835  255491 out.go:309] Setting ErrFile to fd 2...
	I0817 22:20:16.712839  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.713062  255491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:20:16.713667  255491 out.go:303] Setting JSON to false
	I0817 22:20:16.714624  255491 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25342,"bootTime":1692285475,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:20:16.714682  255491 start.go:138] virtualization: kvm guest
	I0817 22:20:16.717535  255491 out.go:177] * [default-k8s-diff-port-321287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:20:16.719151  255491 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:20:16.720536  255491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:20:16.719158  255491 notify.go:220] Checking for updates...
	I0817 22:20:16.724470  255491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:20:16.726182  255491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:20:16.727902  255491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:20:16.729516  255491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:20:16.731373  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:20:16.731749  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.731825  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.746961  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0817 22:20:16.747404  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.748088  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.748116  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.748449  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.748618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.748847  255491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:20:16.749194  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.749239  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.764882  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0817 22:20:16.765357  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.765874  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.765901  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.766289  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.766480  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.802457  255491 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:20:16.804215  255491 start.go:298] selected driver: kvm2
	I0817 22:20:16.804235  255491 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.804379  255491 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:20:16.805157  255491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.805248  255491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:20:16.821166  255491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:20:16.821564  255491 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 22:20:16.821606  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:20:16.821619  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:20:16.821631  255491 start_flags.go:319] config:
	{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.821815  255491 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.823863  255491 out.go:177] * Starting control plane node default-k8s-diff-port-321287 in cluster default-k8s-diff-port-321287
	I0817 22:20:16.825296  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:20:16.825350  255491 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:20:16.825365  255491 cache.go:57] Caching tarball of preloaded images
	I0817 22:20:16.825521  255491 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:20:16.825536  255491 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 22:20:16.825660  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:20:16.825870  255491 start.go:365] acquiring machines lock for default-k8s-diff-port-321287: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:20:17.790384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:20.862432  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:26.942301  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:30.014393  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:36.094411  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:39.166376  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:45.246382  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:48.318418  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:54.398388  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:57.470394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:03.550380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:06.622365  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:12.702351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:15.774370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:21.854413  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:24.926351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:31.006415  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:34.078332  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:40.158437  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:43.230410  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:49.310359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:52.382386  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:58.462394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:01.534395  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:07.614359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:10.686384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:16.766363  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:19.838352  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:25.918380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:28.990416  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:35.070383  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:38.142364  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:44.222341  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:47.294387  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:53.374378  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:56.446375  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:02.526335  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:05.598406  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:11.678435  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:14.750370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:20.830484  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:23.902346  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:29.982456  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:33.054379  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:39.134436  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:42.206472  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:48.286396  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:51.358348  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
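[editor's note] The wall of "Error dialing TCP ... connect: no route to host" entries above is libmachine repeatedly probing the guest's SSH port (22) while the old-k8s-version VM is still unreachable. A minimal Go sketch of that polling loop follows; the 3-second interval and 5-minute deadline are illustrative assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls host:port until a TCP connection succeeds or the
// deadline expires, logging each failure much like the libmachine
// entries above. Interval and deadline are illustrative.
func waitForTCP(addr string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForTCP("192.168.72.56:22", 3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}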
	I0817 22:23:54.362645  255057 start.go:369] acquired machines lock for "no-preload-525875" in 4m31.301140971s
	I0817 22:23:54.362883  255057 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:23:54.362929  255057 fix.go:54] fixHost starting: 
	I0817 22:23:54.363423  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:23:54.363467  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:23:54.379127  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0817 22:23:54.379699  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:23:54.380334  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:23:54.380357  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:23:54.380797  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:23:54.381004  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:23:54.381209  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:23:54.383099  255057 fix.go:102] recreateIfNeeded on no-preload-525875: state=Stopped err=<nil>
	I0817 22:23:54.383145  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	W0817 22:23:54.383332  255057 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:23:54.385187  255057 out.go:177] * Restarting existing kvm2 VM for "no-preload-525875" ...
	I0817 22:23:54.360325  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:23:54.360394  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:23:54.362467  254975 machine.go:91] provisioned docker machine in 4m37.411699893s
	I0817 22:23:54.362520  254975 fix.go:56] fixHost completed within 4m37.434281244s
	I0817 22:23:54.362529  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 4m37.434304432s
	W0817 22:23:54.362577  254975 start.go:672] error starting host: provision: host is not running
	W0817 22:23:54.363017  254975 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0817 22:23:54.363033  254975 start.go:687] Will try again in 5 seconds ...
	I0817 22:23:54.386615  255057 main.go:141] libmachine: (no-preload-525875) Calling .Start
	I0817 22:23:54.386791  255057 main.go:141] libmachine: (no-preload-525875) Ensuring networks are active...
	I0817 22:23:54.387647  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network default is active
	I0817 22:23:54.387973  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network mk-no-preload-525875 is active
	I0817 22:23:54.388332  255057 main.go:141] libmachine: (no-preload-525875) Getting domain xml...
	I0817 22:23:54.389183  255057 main.go:141] libmachine: (no-preload-525875) Creating domain...
	I0817 22:23:55.639391  255057 main.go:141] libmachine: (no-preload-525875) Waiting to get IP...
	I0817 22:23:55.640405  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.640824  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.640956  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.640807  256033 retry.go:31] will retry after 256.854902ms: waiting for machine to come up
	I0817 22:23:55.899499  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.900003  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.900027  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.899976  256033 retry.go:31] will retry after 327.686689ms: waiting for machine to come up
	I0817 22:23:56.229604  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.230132  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.230156  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.230040  256033 retry.go:31] will retry after 464.52975ms: waiting for machine to come up
	I0817 22:23:56.695962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.696359  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.696397  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.696313  256033 retry.go:31] will retry after 556.975938ms: waiting for machine to come up
	I0817 22:23:57.255156  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.255625  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.255664  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.255564  256033 retry.go:31] will retry after 654.756806ms: waiting for machine to come up
	I0817 22:23:57.911407  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.911781  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.911805  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.911733  256033 retry.go:31] will retry after 915.751745ms: waiting for machine to come up
	I0817 22:23:59.364671  254975 start.go:365] acquiring machines lock for old-k8s-version-294781: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:23:58.828834  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:58.829178  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:58.829236  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:58.829153  256033 retry.go:31] will retry after 1.176413613s: waiting for machine to come up
	I0817 22:24:00.006988  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:00.007533  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:00.007603  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:00.007525  256033 retry.go:31] will retry after 1.031006631s: waiting for machine to come up
	I0817 22:24:01.039920  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:01.040354  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:01.040386  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:01.040293  256033 retry.go:31] will retry after 1.781447675s: waiting for machine to come up
	I0817 22:24:02.823240  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:02.823711  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:02.823755  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:02.823652  256033 retry.go:31] will retry after 1.47392319s: waiting for machine to come up
	I0817 22:24:04.299094  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:04.299543  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:04.299572  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:04.299479  256033 retry.go:31] will retry after 1.990284782s: waiting for machine to come up
	I0817 22:24:06.292369  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:06.292831  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:06.292862  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:06.292749  256033 retry.go:31] will retry after 3.34318874s: waiting for machine to come up
	I0817 22:24:09.637907  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:09.638389  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:09.638423  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:09.638335  256033 retry.go:31] will retry after 3.298106143s: waiting for machine to come up
	I0817 22:24:12.939215  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939668  255057 main.go:141] libmachine: (no-preload-525875) Found IP for machine: 192.168.61.196
	I0817 22:24:12.939692  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has current primary IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
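[editor's note] The retry.go entries above show the wait-for-IP loop backing off with randomized, growing delays until the DHCP lease for the domain appears. A small Go sketch of that retry-with-jitter pattern, with illustrative parameters (the real attempt count and base delay differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter calls fn until it succeeds or attempts run out,
// sleeping a randomized, growing delay between tries - the same shape
// as the "will retry after ...ms: waiting for machine to come up"
// lines above. Parameters here are illustrative.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithJitter(5, 300*time.Millisecond, func() error {
		return errors.New("unable to find current IP address of domain")
	})
}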
	I0817 22:24:12.939709  255057 main.go:141] libmachine: (no-preload-525875) Reserving static IP address...
	I0817 22:24:12.940293  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.940330  255057 main.go:141] libmachine: (no-preload-525875) Reserved static IP address: 192.168.61.196
	I0817 22:24:12.940347  255057 main.go:141] libmachine: (no-preload-525875) DBG | skip adding static IP to network mk-no-preload-525875 - found existing host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"}
	I0817 22:24:12.940364  255057 main.go:141] libmachine: (no-preload-525875) DBG | Getting to WaitForSSH function...
	I0817 22:24:12.940381  255057 main.go:141] libmachine: (no-preload-525875) Waiting for SSH to be available...
	I0817 22:24:12.942523  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.942835  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.942870  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.943013  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH client type: external
	I0817 22:24:12.943058  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa (-rw-------)
	I0817 22:24:12.943104  255057 main.go:141] libmachine: (no-preload-525875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:12.943125  255057 main.go:141] libmachine: (no-preload-525875) DBG | About to run SSH command:
	I0817 22:24:12.943135  255057 main.go:141] libmachine: (no-preload-525875) DBG | exit 0
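[editor's note] WaitForSSH above shells out to the system ssh client and simply runs `exit 0`; a zero exit status means the guest is accepting SSH connections. A hedged Go sketch of that probe (option list trimmed; the key path is taken from the log, the rest is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero runs `exit 0` on the guest via the external ssh binary,
// with host-key checking disabled as in the log above. A nil error
// means SSH is ready.
func sshExitZero(addr, keyPath string) error {
	return exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0",
	).Run()
}

func main() {
	err := sshExitZero("192.168.61.196", "/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}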
	I0817 22:24:14.123211  255215 start.go:369] acquired machines lock for "embed-certs-437183" in 4m31.345681226s
	I0817 22:24:14.123281  255215 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:14.123298  255215 fix.go:54] fixHost starting: 
	I0817 22:24:14.123769  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:14.123822  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:14.141321  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0817 22:24:14.141722  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:14.142372  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:24:14.142409  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:14.142871  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:14.143076  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:14.143300  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:24:14.144928  255215 fix.go:102] recreateIfNeeded on embed-certs-437183: state=Stopped err=<nil>
	I0817 22:24:14.144960  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	W0817 22:24:14.145216  255215 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:14.148036  255215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-437183" ...
	I0817 22:24:13.033987  255057 main.go:141] libmachine: (no-preload-525875) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:13.034450  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetConfigRaw
	I0817 22:24:13.035251  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.037756  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038141  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.038176  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038475  255057 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/config.json ...
	I0817 22:24:13.038679  255057 machine.go:88] provisioning docker machine ...
	I0817 22:24:13.038704  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.038922  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039086  255057 buildroot.go:166] provisioning hostname "no-preload-525875"
	I0817 22:24:13.039109  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039238  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.041385  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041666  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.041698  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041838  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.042022  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042206  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042396  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.042612  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.043170  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.043189  255057 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-525875 && echo "no-preload-525875" | sudo tee /etc/hostname
	I0817 22:24:13.177388  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-525875
	
	I0817 22:24:13.177433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.180249  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180571  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.180599  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180808  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.181054  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181224  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181371  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.181544  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.181969  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.181994  255057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-525875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-525875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-525875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:13.307614  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
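[editor's note] The /etc/hosts snippet executed above maps 127.0.1.1 to the new hostname only if no entry for it exists yet. The sketch below shows how such a guard can be rendered per hostname; it is a simplified stand-in for minikube's provisioning template, not its exact code.

package main

import "fmt"

// hostsFixup renders the /etc/hosts guard shown in the log for a given
// hostname; the resulting script is then executed on the guest over SSH.
func hostsFixup(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name)
}

func main() {
	fmt.Println(hostsFixup("no-preload-525875"))
}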
	I0817 22:24:13.307675  255057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:13.307719  255057 buildroot.go:174] setting up certificates
	I0817 22:24:13.307731  255057 provision.go:83] configureAuth start
	I0817 22:24:13.307745  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.308044  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.311084  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311457  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.311491  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311665  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.313712  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314066  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.314101  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314252  255057 provision.go:138] copyHostCerts
	I0817 22:24:13.314354  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:13.314397  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:13.314495  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:13.314610  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:13.314623  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:13.314661  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:13.314735  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:13.314745  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:13.314779  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:13.314841  255057 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.no-preload-525875 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube no-preload-525875]
	I0817 22:24:13.395589  255057 provision.go:172] copyRemoteCerts
	I0817 22:24:13.395693  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:13.395724  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.398603  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.398936  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.398972  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.399154  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.399379  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.399566  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.399717  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.487194  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:13.510918  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:24:13.534013  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:13.556876  255057 provision.go:86] duration metric: configureAuth took 249.122979ms
	I0817 22:24:13.556910  255057 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:13.557143  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:13.557265  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.560140  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560483  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.560514  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560748  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.560965  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561143  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561274  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.561516  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.562128  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.562155  255057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:13.863145  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:13.863181  255057 machine.go:91] provisioned docker machine in 824.487372ms
	I0817 22:24:13.863206  255057 start.go:300] post-start starting for "no-preload-525875" (driver="kvm2")
	I0817 22:24:13.863219  255057 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:13.863247  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.863636  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:13.863681  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.866612  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.866950  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.867000  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.867115  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.867333  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.867524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.867695  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.957157  255057 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:13.961765  255057 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:13.961801  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:13.961919  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:13.962002  255057 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:13.962116  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:13.971105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:13.999336  255057 start.go:303] post-start completed in 136.111451ms
	I0817 22:24:13.999367  255057 fix.go:56] fixHost completed within 19.636437946s
	I0817 22:24:13.999391  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.002294  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002689  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.002717  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002995  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.003236  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003572  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.003744  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:14.004145  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:14.004160  255057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:14.122987  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311054.069328214
	
	I0817 22:24:14.123011  255057 fix.go:206] guest clock: 1692311054.069328214
	I0817 22:24:14.123019  255057 fix.go:219] Guest: 2023-08-17 22:24:14.069328214 +0000 UTC Remote: 2023-08-17 22:24:13.999370872 +0000 UTC m=+291.082280559 (delta=69.957342ms)
	I0817 22:24:14.123080  255057 fix.go:190] guest clock delta is within tolerance: 69.957342ms
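[editor's note] fix.go above reads the guest clock with `date +%s.%N` over SSH and accepts the host/guest drift when it falls within tolerance. A minimal Go sketch of that comparison; the 2-second tolerance used here is an assumption, not minikube's constant.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// absolute drift against the local clock, mirroring the "guest clock
// delta is within tolerance" check above.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	d, err := clockDelta("1692311054.069328214")
	if err == nil {
		fmt.Printf("delta=%v, within tolerance: %v\n", d, d <= 2*time.Second)
	}
}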
	I0817 22:24:14.123087  255057 start.go:83] releasing machines lock for "no-preload-525875", held for 19.760401588s
	I0817 22:24:14.123125  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.123445  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:14.126573  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.126925  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.126962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.127146  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127781  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127974  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.128071  255057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:14.128125  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.128226  255057 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:14.128258  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.131020  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131333  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131367  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131390  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.131715  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.131789  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131829  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131895  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.131975  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.132057  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.132156  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.132272  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.132425  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.219665  255057 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:14.247437  255057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:14.400674  255057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:14.408384  255057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:14.408502  255057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:14.423811  255057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:14.423860  255057 start.go:466] detecting cgroup driver to use...
	I0817 22:24:14.423953  255057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:14.436628  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:14.448671  255057 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:14.448765  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:14.461946  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:14.475294  255057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:14.581194  255057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:14.708045  255057 docker.go:212] disabling docker service ...
	I0817 22:24:14.708110  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:14.722033  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:14.733323  255057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:14.857587  255057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:14.980798  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:14.994728  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:15.012428  255057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:15.012505  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.021683  255057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:15.021763  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.031095  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.040825  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.050770  255057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:15.060644  255057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:15.068941  255057 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:15.069022  255057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:15.081634  255057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
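[editor's note] The sequence above probes the bridge netfilter sysctl, falls back to loading br_netfilter when the key is absent, and then enables IPv4 forwarding. A hedged Go sketch of the same fallback, run locally for illustration (minikube executes these commands over SSH on the guest):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the log above: probe the sysctl, load
// br_netfilter if the key is missing, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println("netfilter setup:", ensureBridgeNetfilter())
}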
	I0817 22:24:15.090552  255057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:15.205174  255057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:15.383127  255057 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:15.383224  255057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:15.391893  255057 start.go:534] Will wait 60s for crictl version
	I0817 22:24:15.391983  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.398121  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:15.450273  255057 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:15.450368  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.506757  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.560170  255057 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:24:14.149845  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Start
	I0817 22:24:14.150032  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring networks are active...
	I0817 22:24:14.150803  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network default is active
	I0817 22:24:14.151110  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network mk-embed-certs-437183 is active
	I0817 22:24:14.151492  255215 main.go:141] libmachine: (embed-certs-437183) Getting domain xml...
	I0817 22:24:14.152247  255215 main.go:141] libmachine: (embed-certs-437183) Creating domain...
	I0817 22:24:15.472135  255215 main.go:141] libmachine: (embed-certs-437183) Waiting to get IP...
	I0817 22:24:15.473014  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.473413  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.473492  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.473421  256157 retry.go:31] will retry after 194.38634ms: waiting for machine to come up
	I0817 22:24:15.670047  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.670479  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.670528  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.670445  256157 retry.go:31] will retry after 332.988154ms: waiting for machine to come up
	I0817 22:24:16.005357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.005862  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.005898  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.005790  256157 retry.go:31] will retry after 376.364025ms: waiting for machine to come up
	I0817 22:24:16.384423  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.384866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.384916  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.384805  256157 retry.go:31] will retry after 392.048125ms: waiting for machine to come up
	I0817 22:24:16.778356  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.778744  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.778780  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.778683  256157 retry.go:31] will retry after 688.962088ms: waiting for machine to come up
	I0817 22:24:17.469767  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:17.470257  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:17.470287  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:17.470211  256157 retry.go:31] will retry after 660.617465ms: waiting for machine to come up
	I0817 22:24:15.561695  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:15.564750  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565097  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:15.565127  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565409  255057 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:15.569673  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:15.584980  255057 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:24:15.585030  255057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:15.617365  255057 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:24:15.617396  255057 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.0-rc.1 registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 registry.k8s.io/kube-scheduler:v1.28.0-rc.1 registry.k8s.io/kube-proxy:v1.28.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:24:15.617470  255057 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.617497  255057 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.617529  255057 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.617606  255057 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.617541  255057 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.617637  255057 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0817 22:24:15.617507  255057 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.617985  255057 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619154  255057 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0817 22:24:15.619338  255057 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619355  255057 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.619350  255057 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.619369  255057 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.619335  255057 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.619381  255057 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.619414  255057 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.793551  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.793935  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.796339  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.797436  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.806385  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.813161  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0817 22:24:15.840200  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.935464  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.940863  255057 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0817 22:24:15.940940  255057 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.940881  255057 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" does not exist at hash "046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd" in container runtime
	I0817 22:24:15.941028  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.941031  255057 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.941115  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952609  255057 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" does not exist at hash "e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef" in container runtime
	I0817 22:24:15.952687  255057 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.952709  255057 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0817 22:24:15.952741  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952751  255057 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.952790  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.007640  255057 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" does not exist at hash "2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d" in container runtime
	I0817 22:24:16.007686  255057 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.007740  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099763  255057 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.0-rc.1" does not exist at hash "cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8" in container runtime
	I0817 22:24:16.099817  255057 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.099873  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099909  255057 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0817 22:24:16.099969  255057 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.099980  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:16.100019  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.100052  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:16.100127  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:16.100145  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:16.100198  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.105175  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.197301  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0817 22:24:16.197377  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197418  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197432  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197437  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.197476  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.197421  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:16.197520  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197535  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.214043  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0817 22:24:16.214189  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:16.225659  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1 (exists)
	I0817 22:24:16.225690  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225750  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225882  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.225973  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.229070  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1 (exists)
	I0817 22:24:16.229235  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1 (exists)
	I0817 22:24:16.258828  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0817 22:24:16.258905  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 22:24:16.258990  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0817 22:24:16.259013  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
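The block above shows the per-image flow for the no-preload profile: "crictl images" reports the image missing, the stale tag is removed with "crictl rmi", the cached tarball already on the VM is reused when "stat" finds it ("copy: skipping ... (exists)"), and the image is then loaded with "sudo podman load -i". A sketch of that decision; all four callbacks are hypothetical stand-ins for the crictl/scp/podman steps that run over SSH.

    package main

    import "fmt"

    // ensureImage sketches the per-image decision visible in the log above: if
    // the runtime already has the image there is nothing to do; otherwise make
    // sure the cached tarball is on the VM (copying it only when missing) and
    // load it with the container tool.
    func ensureImage(name string,
        inRuntime func(string) bool,
        onDisk func(string) bool,
        copyTarball func(string) error,
        loadTarball func(string) error) error {

        if inRuntime(name) {
            return nil // already present in cri-o, no transfer needed
        }
        fmt.Printf("%q needs transfer\n", name)
        if onDisk(name) {
            fmt.Printf("copy: skipping %s (exists)\n", name)
        } else if err := copyTarball(name); err != nil {
            return err
        }
        fmt.Printf("Loading image: %s\n", name)
        return loadTarball(name)
    }

    func main() {
        images := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"}
        for _, img := range images {
            _ = ensureImage(img,
                func(string) bool { return false }, // crictl reports the image missing
                func(string) bool { return true },  // cached tarball already on the VM
                func(string) error { return nil },  // would scp the tarball over
                func(string) error { return nil },  // would run: sudo podman load -i ...
            )
        }
    }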
	I0817 22:24:18.132851  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:18.133243  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:18.133310  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:18.133225  256157 retry.go:31] will retry after 900.178694ms: waiting for machine to come up
	I0817 22:24:19.035179  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:19.035579  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:19.035615  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:19.035514  256157 retry.go:31] will retry after 1.198702878s: waiting for machine to come up
	I0817 22:24:20.236711  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:20.237240  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:20.237273  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:20.237201  256157 retry.go:31] will retry after 1.809846012s: waiting for machine to come up
	I0817 22:24:22.048866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:22.049357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:22.049392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:22.049300  256157 retry.go:31] will retry after 1.671738979s: waiting for machine to come up
	I0817 22:24:18.395405  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1: (2.169611406s)
	I0817 22:24:18.395443  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 from cache
	I0817 22:24:18.395478  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (2.169478272s)
	I0817 22:24:18.395493  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.136469625s)
	I0817 22:24:18.395493  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:18.395509  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0817 22:24:18.395512  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1 (exists)
	I0817 22:24:18.395560  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:20.871009  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1: (2.475415377s)
	I0817 22:24:20.871043  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 from cache
	I0817 22:24:20.871073  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:20.871129  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:23.722312  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:23.722829  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:23.722864  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:23.722757  256157 retry.go:31] will retry after 1.856182792s: waiting for machine to come up
	I0817 22:24:25.580432  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:25.580936  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:25.580969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:25.580873  256157 retry.go:31] will retry after 2.404448523s: waiting for machine to come up
	I0817 22:24:23.529377  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1: (2.658213494s)
	I0817 22:24:23.529418  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 from cache
	I0817 22:24:23.529456  255057 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:23.529532  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:24.907071  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.377507339s)
	I0817 22:24:24.907105  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0817 22:24:24.907135  255057 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:24.907203  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:27.988784  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:27.989226  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:27.989252  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:27.989214  256157 retry.go:31] will retry after 4.145677854s: waiting for machine to come up
	I0817 22:24:32.139031  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139722  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has current primary IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139755  255215 main.go:141] libmachine: (embed-certs-437183) Found IP for machine: 192.168.39.186
	I0817 22:24:32.139768  255215 main.go:141] libmachine: (embed-certs-437183) Reserving static IP address...
	I0817 22:24:32.140361  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.140408  255215 main.go:141] libmachine: (embed-certs-437183) Reserved static IP address: 192.168.39.186
	I0817 22:24:32.140428  255215 main.go:141] libmachine: (embed-certs-437183) DBG | skip adding static IP to network mk-embed-certs-437183 - found existing host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"}
	I0817 22:24:32.140450  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Getting to WaitForSSH function...
	I0817 22:24:32.140465  255215 main.go:141] libmachine: (embed-certs-437183) Waiting for SSH to be available...
	I0817 22:24:32.142752  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143141  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.143192  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143343  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH client type: external
	I0817 22:24:32.143392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa (-rw-------)
	I0817 22:24:32.143431  255215 main.go:141] libmachine: (embed-certs-437183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:32.143459  255215 main.go:141] libmachine: (embed-certs-437183) DBG | About to run SSH command:
	I0817 22:24:32.143475  255215 main.go:141] libmachine: (embed-certs-437183) DBG | exit 0
	I0817 22:24:32.246211  255215 main.go:141] libmachine: (embed-certs-437183) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:32.246582  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetConfigRaw
	I0817 22:24:32.247284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.249789  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250204  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.250237  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250567  255215 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/config.json ...
	I0817 22:24:32.250808  255215 machine.go:88] provisioning docker machine ...
	I0817 22:24:32.250831  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:32.251049  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251209  255215 buildroot.go:166] provisioning hostname "embed-certs-437183"
	I0817 22:24:32.251230  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251344  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.253729  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254094  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.254124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254276  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.254434  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254654  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254817  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.254981  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.255466  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.255481  255215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-437183 && echo "embed-certs-437183" | sudo tee /etc/hostname
	I0817 22:24:32.412247  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437183
	
	I0817 22:24:32.412284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.415194  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415508  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.415561  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415666  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.415910  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416113  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416297  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.416501  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.417004  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.417024  255215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-437183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-437183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-437183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:32.559200  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:32.559253  255215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:32.559282  255215 buildroot.go:174] setting up certificates
	I0817 22:24:32.559299  255215 provision.go:83] configureAuth start
	I0817 22:24:32.559313  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.559696  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.562469  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.562960  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.562989  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.563141  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.565760  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566120  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.566178  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566344  255215 provision.go:138] copyHostCerts
	I0817 22:24:32.566427  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:32.566443  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:32.566504  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:32.566633  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:32.566642  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:32.566676  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:32.566730  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:32.566738  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:32.566755  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:32.566803  255215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-437183 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube embed-certs-437183]
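The provisioning step above generates a server certificate whose SANs cover the VM's IP, localhost, and the profile's host names. A small sketch of producing a certificate with those kinds of SANs using Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs the server cert with the CA files listed in the log.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Build a server certificate with IP and DNS SANs similar to the
        // "san=[...]" list recorded above. Self-signed for illustration only.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-437183"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "embed-certs-437183"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.186"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }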
	I0817 22:24:31.437386  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.530148826s)
	I0817 22:24:31.437453  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0817 22:24:31.437478  255057 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:31.437578  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:32.398228  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0817 22:24:32.398294  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:32.398359  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:33.487487  255491 start.go:369] acquired machines lock for "default-k8s-diff-port-321287" in 4m16.661569765s
	I0817 22:24:33.487552  255491 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:33.487569  255491 fix.go:54] fixHost starting: 
	I0817 22:24:33.488059  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:33.488104  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:33.506430  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0817 22:24:33.506958  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:33.507587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:24:33.507618  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:33.508078  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:33.508296  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:33.508471  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:24:33.510492  255491 fix.go:102] recreateIfNeeded on default-k8s-diff-port-321287: state=Stopped err=<nil>
	I0817 22:24:33.510539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	W0817 22:24:33.510738  255491 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:33.512965  255491 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-321287" ...
	I0817 22:24:32.687763  255215 provision.go:172] copyRemoteCerts
	I0817 22:24:32.687835  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:32.687864  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.690614  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.690921  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.690963  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.691253  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.691469  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.691631  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.691745  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:32.788388  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:32.811861  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:32.835407  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0817 22:24:32.858542  255215 provision.go:86] duration metric: configureAuth took 299.225654ms
	I0817 22:24:32.858581  255215 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:32.858850  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:32.858989  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.861726  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862140  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.862186  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862436  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.862717  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.862961  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.863135  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.863321  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.863744  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.863762  255215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:33.202904  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:33.202942  255215 machine.go:91] provisioned docker machine in 952.11856ms
	I0817 22:24:33.202986  255215 start.go:300] post-start starting for "embed-certs-437183" (driver="kvm2")
	I0817 22:24:33.203002  255215 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:33.203039  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.203427  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:33.203465  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.206544  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.206969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.207004  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.207154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.207407  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.207591  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.207747  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.304648  255215 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:33.309404  255215 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:33.309435  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:33.309536  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:33.309635  255215 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:33.309752  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:33.318682  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:33.343830  255215 start.go:303] post-start completed in 140.8201ms
	I0817 22:24:33.343870  255215 fix.go:56] fixHost completed within 19.220571855s
	I0817 22:24:33.343901  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.347196  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347625  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.347658  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347927  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.348154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348336  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348487  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.348741  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:33.349346  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:33.349361  255215 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:33.487290  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311073.433845199
	
	I0817 22:24:33.487319  255215 fix.go:206] guest clock: 1692311073.433845199
	I0817 22:24:33.487331  255215 fix.go:219] Guest: 2023-08-17 22:24:33.433845199 +0000 UTC Remote: 2023-08-17 22:24:33.343875474 +0000 UTC m=+290.714391364 (delta=89.969725ms)
	I0817 22:24:33.487370  255215 fix.go:190] guest clock delta is within tolerance: 89.969725ms
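The clock check just above runs what appears to be "date +%s.%N" on the guest (the logger renders the verbs as %!s(MISSING) placeholders), parses the seconds.nanoseconds output, and compares it with the host's timestamp. A sketch of that parsing and comparison; the one-second tolerance constant is an assumption for illustration, since the log only reports that the ~90ms delta was within tolerance.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the "seconds.nanoseconds" string printed by the
    // guest (the log above shows 1692311073.433845199) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1692311073.433845199")
        if err != nil {
            panic(err)
        }
        host := time.Unix(1692311073, 343875474) // host-side timestamp from the log
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        // Illustrative tolerance; the real threshold is not shown in the log.
        const tolerance = 1 * time.Second
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }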
	I0817 22:24:33.487378  255215 start.go:83] releasing machines lock for "embed-certs-437183", held for 19.364124776s
	I0817 22:24:33.487412  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.487714  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:33.490444  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.490945  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.490975  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.491191  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492024  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492278  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492378  255215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:33.492440  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.492569  255215 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:33.492600  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.495461  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495742  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495836  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.495879  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.496130  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496147  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496287  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496341  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496445  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496604  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496605  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496792  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.496886  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.634234  255215 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:33.642529  255215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:33.802107  255215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:33.808439  255215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:33.808520  255215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:33.823947  255215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:33.823975  255215 start.go:466] detecting cgroup driver to use...
	I0817 22:24:33.824058  255215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:33.839665  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:33.854435  255215 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:33.854512  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:33.871530  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:33.886466  255215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:34.017312  255215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:34.152720  255215 docker.go:212] disabling docker service ...
	I0817 22:24:34.152811  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:34.170506  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:34.186072  255215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:34.327678  255215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:34.450774  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:34.468330  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:34.491610  255215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:34.491684  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.506266  255215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:34.506360  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.517471  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.531351  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.542363  255215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:34.553383  255215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:34.562937  255215 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:34.563029  255215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:34.575978  255215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:34.588500  255215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:34.715821  255215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:34.912771  255215 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:34.912853  255215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:34.918377  255215 start.go:534] Will wait 60s for crictl version
	I0817 22:24:34.918445  255215 ssh_runner.go:195] Run: which crictl
	I0817 22:24:34.922462  255215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:34.962654  255215 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:34.962754  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.020574  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.078516  255215 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
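The configuration sequence at 22:24:34 above points crictl at crio.sock via /etc/crictl.yaml, rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed, and then restarts crio. A sketch of how those shell commands could be assembled; the helper name and structure are illustrative only, not minikube's code.

    package main

    import "fmt"

    // crioSetupCommands assembles the shell commands seen in the log above for
    // pointing cri-o at a pause image and cgroup driver and restarting it.
    func crioSetupCommands(pauseImage, cgroupDriver string) []string {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            "sudo mkdir -p /etc && printf '%s\\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml",
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
    }

    func main() {
        for _, cmd := range crioSetupCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
            fmt.Println(cmd)
        }
    }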
	I0817 22:24:33.514448  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Start
	I0817 22:24:33.514667  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring networks are active...
	I0817 22:24:33.515504  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network default is active
	I0817 22:24:33.515973  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network mk-default-k8s-diff-port-321287 is active
	I0817 22:24:33.516607  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Getting domain xml...
	I0817 22:24:33.517407  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Creating domain...
	I0817 22:24:35.032992  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting to get IP...
	I0817 22:24:35.034213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034833  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.034747  256286 retry.go:31] will retry after 255.561446ms: waiting for machine to come up
	I0817 22:24:35.292497  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293071  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293110  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.293035  256286 retry.go:31] will retry after 265.433217ms: waiting for machine to come up
	I0817 22:24:35.560591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561221  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.561138  256286 retry.go:31] will retry after 429.726379ms: waiting for machine to come up
	I0817 22:24:35.993046  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993573  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.993482  256286 retry.go:31] will retry after 583.273043ms: waiting for machine to come up
	I0817 22:24:36.578452  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578943  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578983  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:36.578889  256286 retry.go:31] will retry after 504.577651ms: waiting for machine to come up
	I0817 22:24:35.080561  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:35.083955  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084338  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:35.084376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084624  255215 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:35.088994  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
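The bash one-liner above refreshes the host.minikube.internal entry in the guest's /etc/hosts: it filters out any stale line ending in that name, appends the new mapping, writes the result to a temp file, and copies it back with sudo. A rough Go equivalent of the same idea (a sketch with a hypothetical helper; the real code simply runs the shell line shown):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so that exactly one line maps name to ip,
// preserving every other entry. A sketch only; real code needs root and
// should write via a temp file before copying into place.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}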
	I0817 22:24:35.104758  255215 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:35.104814  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:35.140529  255215 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:35.140606  255215 ssh_runner.go:195] Run: which lz4
	I0817 22:24:35.144869  255215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:24:35.149131  255215 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:35.149168  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:24:37.067793  255215 crio.go:444] Took 1.922962 seconds to copy over tarball
	I0817 22:24:37.067867  255215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
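At this point the runner has checked that /preloaded.tar.lz4 is absent on the guest, copied the cached preload tarball over SSH, and is unpacking it into /var with tar -I lz4. A rough sketch of that sequence using the system ssh/scp clients (assumed host address and paths, not minikube's ssh_runner API):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command on the guest via the system ssh client.
// The host string is a placeholder for illustration.
func run(host, cmd string) error {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	host := "docker@192.168.39.186" // placeholder guest address
	local := "preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4"

	// Skip the copy if the tarball is already on the guest.
	if err := run(host, "stat /preloaded.tar.lz4"); err != nil {
		// scp the cached tarball, then extract it into /var with lz4.
		if out, err := exec.Command("scp", local, host+":/preloaded.tar.lz4").CombinedOutput(); err != nil {
			fmt.Printf("copy failed: %v\n%s", err, out)
			return
		}
		if err := run(host, "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
			return
		}
		_ = run(host, "sudo rm -f /preloaded.tar.lz4")
	}
	fmt.Println("preload in place")
}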
	I0817 22:24:34.276465  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (1.878070898s)
	I0817 22:24:34.276495  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 from cache
	I0817 22:24:34.276528  255057 cache_images.go:123] Successfully loaded all cached images
	I0817 22:24:34.276535  255057 cache_images.go:92] LoadImages completed in 18.659123421s
	I0817 22:24:34.276651  255057 ssh_runner.go:195] Run: crio config
	I0817 22:24:34.349440  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:34.349470  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:34.349525  255057 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:34.349559  255057 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-525875 NodeName:no-preload-525875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:34.349737  255057 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-525875"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:34.349852  255057 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-525875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:34.349927  255057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:24:34.361082  255057 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:34.361211  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:34.370571  255057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0817 22:24:34.390596  255057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:24:34.409602  255057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0817 22:24:34.431076  255057 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:34.435869  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:34.448753  255057 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875 for IP: 192.168.61.196
	I0817 22:24:34.448854  255057 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:34.449077  255057 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:34.449125  255057 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:34.449229  255057 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/client.key
	I0817 22:24:34.449287  255057 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key.0d67e2f2
	I0817 22:24:34.449320  255057 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key
	I0817 22:24:34.449438  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:34.449466  255057 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:34.449476  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:34.449499  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:34.449523  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:34.449545  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:34.449586  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:34.450600  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:34.481454  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:24:34.514638  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:34.539306  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:24:34.565390  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:34.595648  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:34.628105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:34.654925  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:34.684138  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:34.709433  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:34.736933  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:34.772217  255057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:34.790940  255057 ssh_runner.go:195] Run: openssl version
	I0817 22:24:34.800419  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:34.811545  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819623  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819697  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.825793  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:34.836531  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:34.847239  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852331  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852394  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.861659  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:34.871817  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:34.883257  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889654  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889728  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.897773  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
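The ls/openssl/ln sequence above installs each CA certificate into the OpenSSL trust directory: the subject hash printed by `openssl x509 -hash -noout` becomes the `<hash>.0` symlink name under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A small sketch of that idea (illustrative only, not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into trustDir under its OpenSSL subject hash,
// e.g. /etc/ssl/certs/b5213941.0, so openssl-based clients trust it.
func installCA(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}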
	I0817 22:24:34.909259  255057 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:34.914775  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:34.921549  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:34.928370  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:34.934849  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:34.941470  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:34.949932  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
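`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 24 hours, which is how the existing control-plane certs above are judged reusable. The same check can be expressed directly in Go with crypto/x509 (an illustrative sketch, not the code behind these log lines):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window (the -checkend equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate")
	} else {
		fmt.Println("certificate still valid")
	}
}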
	I0817 22:24:34.956863  255057 kubeadm.go:404] StartCluster: {Name:no-preload-525875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525
875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:34.957036  255057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:34.957123  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:35.005195  255057 cri.go:89] found id: ""
	I0817 22:24:35.005282  255057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:35.015727  255057 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:35.015754  255057 kubeadm.go:636] restartCluster start
	I0817 22:24:35.015821  255057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:35.025333  255057 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.026796  255057 kubeconfig.go:92] found "no-preload-525875" server: "https://192.168.61.196:8443"
	I0817 22:24:35.030361  255057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:35.040698  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.040754  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.055650  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.055675  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.055719  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.066812  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.567215  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.567291  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.580471  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.066958  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.067035  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.081758  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.567234  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.567320  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.582474  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.066970  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.067060  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.079066  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.567780  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.567887  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.583652  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.085672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086184  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086222  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.086130  256286 retry.go:31] will retry after 660.028004ms: waiting for machine to come up
	I0817 22:24:37.747563  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748056  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748086  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.748020  256286 retry.go:31] will retry after 798.952498ms: waiting for machine to come up
	I0817 22:24:38.548762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549243  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549276  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:38.549193  256286 retry.go:31] will retry after 1.15249289s: waiting for machine to come up
	I0817 22:24:39.703164  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703739  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703773  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:39.703675  256286 retry.go:31] will retry after 1.300284471s: waiting for machine to come up
	I0817 22:24:41.006289  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006781  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006814  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:41.006717  256286 retry.go:31] will retry after 1.500753962s: waiting for machine to come up
	I0817 22:24:40.155737  255215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087825588s)
	I0817 22:24:40.155771  255215 crio.go:451] Took 3.087946 seconds to extract the tarball
	I0817 22:24:40.155784  255215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:24:40.196940  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:40.238837  255215 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:24:40.238863  255215 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:24:40.238934  255215 ssh_runner.go:195] Run: crio config
	I0817 22:24:40.302526  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:24:40.302552  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:40.302572  255215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:40.302593  255215 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-437183 NodeName:embed-certs-437183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:40.302793  255215 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-437183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:40.302860  255215 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-437183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:40.302914  255215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:24:40.312428  255215 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:40.312517  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:40.321824  255215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0817 22:24:40.340069  255215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:24:40.358609  255215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0817 22:24:40.376546  255215 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:40.380576  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:40.394264  255215 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183 for IP: 192.168.39.186
	I0817 22:24:40.394310  255215 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:40.394509  255215 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:40.394569  255215 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:40.394678  255215 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/client.key
	I0817 22:24:40.394749  255215 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key.d0691019
	I0817 22:24:40.394810  255215 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key
	I0817 22:24:40.394956  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:40.394999  255215 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:40.395013  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:40.395056  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:40.395096  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:40.395127  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:40.395197  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:40.396122  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:40.421809  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:24:40.447412  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:40.472678  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:24:40.501303  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:40.528016  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:40.553741  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:40.581792  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:40.609270  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:40.634901  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:40.659698  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:40.685767  255215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:40.704114  255215 ssh_runner.go:195] Run: openssl version
	I0817 22:24:40.709921  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:40.720035  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725167  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725232  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.731054  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:40.741277  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:40.751649  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757538  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757621  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.763574  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:40.773786  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:40.784152  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790448  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790529  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.796689  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:40.806968  255215 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:40.811858  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:40.818172  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:40.824439  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:40.830588  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:40.836734  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:40.842857  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:24:40.849072  255215 kubeadm.go:404] StartCluster: {Name:embed-certs-437183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/ho
me/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:40.849208  255215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:40.849269  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:40.882040  255215 cri.go:89] found id: ""
	I0817 22:24:40.882132  255215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:40.893833  255215 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:40.893859  255215 kubeadm.go:636] restartCluster start
	I0817 22:24:40.893926  255215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:40.906498  255215 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.907768  255215 kubeconfig.go:92] found "embed-certs-437183" server: "https://192.168.39.186:8443"
	I0817 22:24:40.910282  255215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:40.921945  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.922021  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.933335  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.933360  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.933417  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.944168  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.444996  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.445109  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.457502  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.944752  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.944881  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.960929  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.444350  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.444464  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.461555  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.066927  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.067043  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.082831  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.567259  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.567347  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.581544  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.067112  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.067211  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.078859  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.566916  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.567075  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.582637  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.067188  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.067286  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.082771  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.567236  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.567331  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.583192  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.067806  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.067953  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.082962  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.567559  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.567664  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.582761  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.067267  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.067357  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.078631  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.567181  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.567299  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.583270  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.509044  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509662  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509688  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:42.509599  256286 retry.go:31] will retry after 2.726859315s: waiting for machine to come up
	I0817 22:24:45.239162  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239727  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239756  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:45.239667  256286 retry.go:31] will retry after 2.868820101s: waiting for machine to come up
	I0817 22:24:42.944983  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.945083  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.960949  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.444415  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.444541  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.460157  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.944659  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.944757  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.960506  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.444408  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.444544  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.460666  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.944252  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.944358  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.956137  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.444667  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.444779  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.460524  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.944710  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.945003  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.961038  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.444556  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.444684  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.459345  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.944760  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.944858  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.961217  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:47.444786  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.444935  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.460748  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.067683  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.067794  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.083038  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.567750  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.567850  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.579427  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.066928  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.067014  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.078671  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.567463  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.567559  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.579377  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.041151  255057 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:45.041202  255057 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:45.041218  255057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:45.041279  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:45.080480  255057 cri.go:89] found id: ""
	I0817 22:24:45.080569  255057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:45.096518  255057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:45.107778  255057 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:45.107880  255057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117115  255057 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117151  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.269517  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.790366  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.988106  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.124121  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.219342  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:46.219438  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.241849  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.795050  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.295314  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.795361  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
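The pgrep loop logged above is how minikube decides the freshly reconfigured apiserver process exists before it starts probing health. A minimal sketch of that wait, assuming the command is run locally rather than through minikube's ssh_runner (names here are illustrative, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` until it
// exits 0 (a matching pid was found) or the timeout elapses. A non-zero exit,
// as in the "Process exited with status 1" lines above, just means "not yet".
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the kube-apiserver process")
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}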
	I0817 22:24:48.111566  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112173  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:48.112079  256286 retry.go:31] will retry after 3.129130141s: waiting for machine to come up
	I0817 22:24:51.245244  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245759  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245788  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:51.245707  256286 retry.go:31] will retry after 4.573749963s: waiting for machine to come up
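The retry.go lines above come from the kvm2 driver waiting for the restarted VM to pick up a DHCP lease; the delays ("will retry after 3.129130141s", then ~4.5s) grow with some jitter. A rough sketch of that pattern, not the driver's actual code (lookupLease is a hypothetical stand-in for querying libvirt's DHCP leases by MAC):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLease is a placeholder for asking libvirt which IP, if any, is
// currently leased to the given MAC address on the network.
func lookupLease(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls for a lease, sleeping with doubling-plus-jitter delays until
// an address appears or the overall deadline passes.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	wait := time.Second
	for time.Since(start) < deadline {
		if ip, err := lookupLease(mac); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(wait + time.Duration(rand.Int63n(int64(wait))))
		if wait < 8*time.Second {
			wait *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	fmt.Println(waitForIP("52:54:00:24:e5:b8", 2*time.Minute))
}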
	I0817 22:24:47.944303  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.944406  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.960613  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.445144  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.445245  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.460221  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.944726  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.944811  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.958575  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.444744  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.444875  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.460348  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.944986  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.945117  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.958396  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.445013  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:50.445110  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:50.459941  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.922423  255215 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:50.922493  255215 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:50.922513  255215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:50.922581  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:50.964064  255215 cri.go:89] found id: ""
	I0817 22:24:50.964154  255215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:50.980513  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:50.990086  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:50.990152  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999907  255215 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999935  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:51.147593  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.150655  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.002996323s)
	I0817 22:24:52.150694  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.367611  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.461186  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.534447  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:52.534547  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:52.551513  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.295087  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.794596  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.817042  255057 api_server.go:72] duration metric: took 2.597699698s to wait for apiserver process to appear ...
	I0817 22:24:48.817069  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:48.817086  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.817615  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:48.817653  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.818012  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:49.318894  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.160567  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.160612  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.160627  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.246065  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.246117  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.318300  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.394871  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.394932  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:52.818493  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.825349  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.825391  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.318277  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.324705  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:53.324751  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.818240  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.823823  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:24:53.834528  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:24:53.834573  255057 api_server.go:131] duration metric: took 5.01749639s to wait for apiserver health ...
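The healthz sequence above is typical of an apiserver coming back up: anonymous requests get 403 while RBAC bootstrap roles are still being created, then 500 while individual poststarthooks (rbac/bootstrap-roles, bootstrap-controller, ...) still report failed, and finally 200 with a bare "ok". A minimal sketch of that poll, assuming a self-signed serving certificate and no client credentials (hence InsecureSkipVerify and the anonymous 403s):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the /healthz endpoint until it returns 200, printing the
// intermediate 403/500 bodies the way the log above records them.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.196:8443/healthz", time.Minute))
}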
	I0817 22:24:53.834586  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:53.834596  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:53.836827  255057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:53.838602  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:24:53.850880  255057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
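The scp above copies a 457-byte bridge conflist into /etc/cni/net.d, but the log does not show its contents. For orientation only, a generic bridge-plus-portmap conflist written the same way (this JSON is illustrative, not minikube's exact file, and the 10.244.0.0/16 subnet is an assumption):

package main

import "os"

// A typical bridge CNI configuration: a bridge plugin with host-local IPAM and
// a portmap plugin for hostPort support.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}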
	I0817 22:24:53.871556  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:24:53.886793  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:24:53.886858  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:24:53.886875  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:24:53.886889  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:24:53.886902  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:24:53.886922  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:24:53.886939  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:24:53.886948  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:24:53.886961  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:24:53.886975  255057 system_pods.go:74] duration metric: took 15.392207ms to wait for pod list to return data ...
	I0817 22:24:53.886988  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:24:53.891527  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:24:53.891589  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:24:53.891630  255057 node_conditions.go:105] duration metric: took 4.635197ms to run NodePressure ...
	I0817 22:24:53.891656  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:54.230065  255057 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239113  255057 kubeadm.go:787] kubelet initialised
	I0817 22:24:54.239146  255057 kubeadm.go:788] duration metric: took 9.048225ms waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239159  255057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:54.251454  255057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.266584  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266619  255057 pod_ready.go:81] duration metric: took 15.127554ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.266633  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266645  255057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.278901  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278932  255057 pod_ready.go:81] duration metric: took 12.266962ms waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.278944  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278952  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.297982  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298020  255057 pod_ready.go:81] duration metric: took 19.058778ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.298032  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298047  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.309929  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309967  255057 pod_ready.go:81] duration metric: took 11.898508ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.309980  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309991  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.676448  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676495  255057 pod_ready.go:81] duration metric: took 366.48994ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.676507  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676547  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.078351  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078392  255057 pod_ready.go:81] duration metric: took 401.831269ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.078405  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078416  255057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.476059  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476101  255057 pod_ready.go:81] duration metric: took 397.677369ms waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.476111  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476121  255057 pod_ready.go:38] duration metric: took 1.236947103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
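Each pod_ready.go line above skips a pod because the node itself still reports Ready=False right after the kubelet restart, rather than failing the wait outright. A small client-go sketch of that node check (not minikube's own helper; the kubeconfig path helper and node name are used here only for illustration):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has its Ready condition
// set to True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(nodeReady(context.Background(), cs, "no-preload-525875"))
}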
	I0817 22:24:55.476143  255057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:24:55.487413  255057 ops.go:34] apiserver oom_adj: -16
	I0817 22:24:55.487448  255057 kubeadm.go:640] restartCluster took 20.471686915s
	I0817 22:24:55.487459  255057 kubeadm.go:406] StartCluster complete in 20.530629906s
	I0817 22:24:55.487482  255057 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.487591  255057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:24:55.489799  255057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.490091  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:24:55.490202  255057 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:24:55.490349  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:55.490375  255057 addons.go:69] Setting storage-provisioner=true in profile "no-preload-525875"
	I0817 22:24:55.490380  255057 addons.go:69] Setting metrics-server=true in profile "no-preload-525875"
	I0817 22:24:55.490397  255057 addons.go:231] Setting addon storage-provisioner=true in "no-preload-525875"
	I0817 22:24:55.490404  255057 addons.go:231] Setting addon metrics-server=true in "no-preload-525875"
	W0817 22:24:55.490409  255057 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:24:55.490435  255057 addons.go:69] Setting default-storageclass=true in profile "no-preload-525875"
	I0817 22:24:55.490465  255057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-525875"
	I0817 22:24:55.490474  255057 host.go:66] Checking if "no-preload-525875" exists ...
	W0817 22:24:55.490413  255057 addons.go:240] addon metrics-server should already be in state true
	I0817 22:24:55.490547  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.491607  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.491742  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492181  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492232  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492255  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492291  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.503335  255057 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-525875" context rescaled to 1 replicas
	I0817 22:24:55.503399  255057 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:24:55.505836  255057 out.go:177] * Verifying Kubernetes components...
	I0817 22:24:55.507438  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:24:55.512841  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0817 22:24:55.513126  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0817 22:24:55.513241  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0817 22:24:55.513441  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513567  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513770  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.514042  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514082  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514128  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514159  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514577  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514595  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514708  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514733  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514804  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.515081  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.515186  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515223  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.515651  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515699  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.532135  255057 addons.go:231] Setting addon default-storageclass=true in "no-preload-525875"
	W0817 22:24:55.532171  255057 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:24:55.532205  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.532614  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.532665  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.535464  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0817 22:24:55.537205  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:24:55.537544  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.537676  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.538005  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538022  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538197  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538209  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538328  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538574  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538694  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.538757  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.540907  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.541221  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.543481  255057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:55.545233  255057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:24:55.820955  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.821534  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Found IP for machine: 192.168.50.30
	I0817 22:24:55.821557  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserving static IP address...
	I0817 22:24:55.821590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has current primary IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.822134  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.822169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | skip adding static IP to network mk-default-k8s-diff-port-321287 - found existing host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"}
	I0817 22:24:55.822189  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Getting to WaitForSSH function...
	I0817 22:24:55.822212  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserved static IP address: 192.168.50.30
	I0817 22:24:55.822225  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for SSH to be available...
	I0817 22:24:55.825198  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.825630  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825769  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH client type: external
	I0817 22:24:55.825802  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa (-rw-------)
	I0817 22:24:55.825837  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:55.825855  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | About to run SSH command:
	I0817 22:24:55.825874  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | exit 0
	I0817 22:24:55.923224  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:55.923669  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetConfigRaw
	I0817 22:24:55.924434  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:55.927453  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.927935  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.927987  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.928304  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:24:55.928581  255491 machine.go:88] provisioning docker machine ...
	I0817 22:24:55.928610  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:55.928818  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.928963  255491 buildroot.go:166] provisioning hostname "default-k8s-diff-port-321287"
	I0817 22:24:55.928984  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.929169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:55.931672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932179  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.932213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:55.932606  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.932862  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.933008  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:55.933228  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:55.933895  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:55.933917  255491 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-321287 && echo "default-k8s-diff-port-321287" | sudo tee /etc/hostname
	I0817 22:24:56.066560  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-321287
	
	I0817 22:24:56.066599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.070072  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070509  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.070590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070901  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.071175  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071377  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071589  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.071813  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.072479  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.072511  255491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-321287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-321287/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-321287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:56.210857  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:56.210897  255491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:56.210954  255491 buildroot.go:174] setting up certificates
	I0817 22:24:56.210968  255491 provision.go:83] configureAuth start
	I0817 22:24:56.210981  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:56.211435  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:56.214305  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214711  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.214762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214931  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.217766  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218200  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.218245  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218444  255491 provision.go:138] copyHostCerts
	I0817 22:24:56.218519  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:56.218533  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:56.218609  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:56.218728  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:56.218738  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:56.218769  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:56.218846  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:56.218856  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:56.218886  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:56.218953  255491 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-321287 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube default-k8s-diff-port-321287]
	I0817 22:24:56.289985  255491 provision.go:172] copyRemoteCerts
	I0817 22:24:56.290068  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:56.290104  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.293536  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.293996  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.294027  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.294218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.294456  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.294675  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.294866  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.386746  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:56.413448  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 22:24:56.438758  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:24:56.467489  255491 provision.go:86] duration metric: configureAuth took 256.504259ms
	I0817 22:24:56.467525  255491 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:56.467792  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:56.467917  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.470870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.471373  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471601  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.471839  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472048  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.472441  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.473139  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.473162  255491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:57.100503  254975 start.go:369] acquired machines lock for "old-k8s-version-294781" in 57.735745135s
	I0817 22:24:57.100571  254975 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:57.100583  254975 fix.go:54] fixHost starting: 
	I0817 22:24:57.101120  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:57.101172  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:57.121393  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0817 22:24:57.122017  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:57.122807  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:24:57.122834  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:57.123289  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:57.123463  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:24:57.123584  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:24:57.125545  254975 fix.go:102] recreateIfNeeded on old-k8s-version-294781: state=Stopped err=<nil>
	I0817 22:24:57.125580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	W0817 22:24:57.125759  254975 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:57.127853  254975 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-294781" ...
	I0817 22:24:55.546816  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:24:55.546839  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:24:55.546870  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.545324  255057 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.546955  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:24:55.546971  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.551364  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552354  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552580  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0817 22:24:55.552920  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.552950  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553052  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.553160  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553171  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.553238  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553408  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553592  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553747  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553751  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553805  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.553823  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.553914  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553952  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554237  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.554648  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554839  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.554878  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.594781  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0817 22:24:55.595253  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.595928  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.595955  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.596358  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.596659  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.598866  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.599111  255057 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.599123  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:24:55.599141  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.602520  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.602895  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.602924  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.603114  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.603334  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.603537  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.603678  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.693508  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:24:55.693535  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:24:55.720303  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.739691  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:24:55.739725  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:24:55.752809  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.793480  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:55.793512  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:24:55.805075  255057 node_ready.go:35] waiting up to 6m0s for node "no-preload-525875" to be "Ready" ...
	I0817 22:24:55.805164  255057 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:24:55.834328  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:57.451781  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.731427598s)
	I0817 22:24:57.451824  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.698971636s)
	I0817 22:24:57.451845  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451859  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.451876  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451887  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452756  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.452808  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.452818  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.452832  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.452842  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452965  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453000  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453009  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453019  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453027  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453173  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453247  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453270  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453295  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453306  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453677  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453709  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453720  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.455299  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.455300  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.455325  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.564475  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.730071346s)
	I0817 22:24:57.564539  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.564551  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565087  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565160  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565170  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565185  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.565217  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565483  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565530  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565539  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565550  255057 addons.go:467] Verifying addon metrics-server=true in "no-preload-525875"
	I0817 22:24:57.569420  255057 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:24:53.063998  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:53.564081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.064081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.564321  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.064476  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.090168  255215 api_server.go:72] duration metric: took 2.555721263s to wait for apiserver process to appear ...
	I0817 22:24:55.090200  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:55.090223  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:57.571712  255057 addons.go:502] enable addons completed in 2.081503451s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:24:57.882753  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:56.835353  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:56.835388  255491 machine.go:91] provisioned docker machine in 906.787255ms
	I0817 22:24:56.835401  255491 start.go:300] post-start starting for "default-k8s-diff-port-321287" (driver="kvm2")
	I0817 22:24:56.835415  255491 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:56.835460  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:56.835881  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:56.835925  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.838868  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839240  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.839274  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839366  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.839581  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.839808  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.839994  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.932979  255491 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:56.937642  255491 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:56.937675  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:56.937770  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:56.937877  255491 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:56.938003  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:56.949478  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:56.975557  255491 start.go:303] post-start completed in 140.136722ms
	I0817 22:24:56.975589  255491 fix.go:56] fixHost completed within 23.488019817s
	I0817 22:24:56.975618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.979039  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979486  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.979549  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979673  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.979951  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980152  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980301  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.980507  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.981194  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.981211  255491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:57.100308  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311097.042275817
	
	I0817 22:24:57.100341  255491 fix.go:206] guest clock: 1692311097.042275817
	I0817 22:24:57.100351  255491 fix.go:219] Guest: 2023-08-17 22:24:57.042275817 +0000 UTC Remote: 2023-08-17 22:24:56.975593678 +0000 UTC m=+280.298176937 (delta=66.682139ms)
	I0817 22:24:57.100389  255491 fix.go:190] guest clock delta is within tolerance: 66.682139ms
	I0817 22:24:57.100396  255491 start.go:83] releasing machines lock for "default-k8s-diff-port-321287", held for 23.61286841s
	I0817 22:24:57.100436  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.100813  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:57.104312  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.104719  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.104807  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.105050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105744  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105949  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.106081  255491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:57.106133  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.106268  255491 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:57.106395  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.110145  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110531  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.110577  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.111166  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.111352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.111402  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.111567  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.112700  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.112751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.112980  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.113206  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.113379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.113534  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.200530  255491 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:57.232758  255491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:57.405574  255491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:57.413543  255491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:57.413637  255491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:57.438687  255491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:57.438718  255491 start.go:466] detecting cgroup driver to use...
	I0817 22:24:57.438808  255491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:57.458572  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:57.475320  255491 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:57.475397  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:57.493585  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:57.512274  255491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:57.650975  255491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:57.788299  255491 docker.go:212] disabling docker service ...
	I0817 22:24:57.788395  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:57.806350  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:57.819894  255491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:57.966925  255491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:58.088274  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:58.107210  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:58.129691  255491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:58.129766  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.141217  255491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:58.141388  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.153376  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.166177  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.177326  255491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:58.191627  255491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:58.203913  255491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:58.204001  255491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:58.222901  255491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:58.233280  255491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:58.366794  255491 ssh_runner.go:195] Run: sudo systemctl restart crio
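(Editor's note: a consolidated view of the CRI-O reconfiguration performed in the preceding steps, gathered from the commands already shown in the log against the same /etc/crio/crio.conf.d/02-crio.conf drop-in; a sketch for readers reproducing the setup by hand.)

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter                        # bridge-nf-call-iptables sysctl was missing
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio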
	I0817 22:24:58.603364  255491 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:58.603462  255491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:58.616285  255491 start.go:534] Will wait 60s for crictl version
	I0817 22:24:58.616397  255491 ssh_runner.go:195] Run: which crictl
	I0817 22:24:58.622933  255491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:58.668866  255491 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:58.668961  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.735680  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.800442  255491 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:59.550327  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.550367  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:59.550385  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:59.646890  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.646928  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:00.147486  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.160700  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.160745  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:00.647077  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.685626  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.685678  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.147134  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.156042  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:01.156083  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.647569  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.657291  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:25:01.686204  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:01.686260  255215 api_server.go:131] duration metric: took 6.59605111s to wait for apiserver health ...
	I0817 22:25:01.686274  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:25:01.686283  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:01.688856  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
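(Editor's note: the 403 and 500 responses above are the expected progression while the apiserver finishes its post-start hooks; anonymous requests are rejected until the RBAC bootstrap roles exist, after which /healthz returns 200. The same probe can be reproduced by hand against the endpoint from the log; a sketch:)

    curl -ks https://192.168.39.186:8443/healthz             # plain health check
    curl -ks "https://192.168.39.186:8443/healthz?verbose"   # per-check breakdown like the output above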
	I0817 22:24:58.802321  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:58.806172  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.806661  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:58.806696  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.807029  255491 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:58.813045  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:58.830937  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:58.831008  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:58.880355  255491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:58.880469  255491 ssh_runner.go:195] Run: which lz4
	I0817 22:24:58.886729  255491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:24:58.893418  255491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:58.893496  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:25:01.093233  255491 crio.go:444] Took 2.206544 seconds to copy over tarball
	I0817 22:25:01.093422  255491 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
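(Editor's note: the preload path avoids pulling each image individually: the tarball of pre-extracted images is copied to the node and unpacked straight into /var, after which crictl should list the Kubernetes images. A sketch of the equivalent manual steps, using the same file names as the log:)

    # on the node, after the tarball has been copied to /preloaded.tar.lz4
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json   # the v1.27.4 images should now be preloaded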
	I0817 22:24:57.129390  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Start
	I0817 22:24:57.134160  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring networks are active...
	I0817 22:24:57.134190  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network default is active
	I0817 22:24:57.134205  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network mk-old-k8s-version-294781 is active
	I0817 22:24:57.134214  254975 main.go:141] libmachine: (old-k8s-version-294781) Getting domain xml...
	I0817 22:24:57.134228  254975 main.go:141] libmachine: (old-k8s-version-294781) Creating domain...
	I0817 22:24:58.694125  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting to get IP...
	I0817 22:24:58.695714  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:58.696209  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:58.696356  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:58.696219  256493 retry.go:31] will retry after 307.640559ms: waiting for machine to come up
	I0817 22:24:59.006214  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.008497  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.008536  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.006931  256493 retry.go:31] will retry after 316.904618ms: waiting for machine to come up
	I0817 22:24:59.325929  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.326634  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.326672  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.326593  256493 retry.go:31] will retry after 466.068046ms: waiting for machine to come up
	I0817 22:24:59.794718  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.795268  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.795294  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.795200  256493 retry.go:31] will retry after 399.064857ms: waiting for machine to come up
	I0817 22:25:00.196015  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.196733  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.196760  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.196632  256493 retry.go:31] will retry after 553.183294ms: waiting for machine to come up
	I0817 22:25:00.751687  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.752341  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.752366  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.752283  256493 retry.go:31] will retry after 815.149471ms: waiting for machine to come up
	I0817 22:25:01.568847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:01.569679  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:01.569709  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:01.569547  256493 retry.go:31] will retry after 827.38414ms: waiting for machine to come up
	I0817 22:25:01.690788  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:01.726335  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:01.804837  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:01.844074  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:01.844121  255215 system_pods.go:61] "coredns-5d78c9869d-twvdv" [f8305fa5-f0e7-4090-af8f-a9eefe00be65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:01.844134  255215 system_pods.go:61] "etcd-embed-certs-437183" [409212ae-25eb-4221-b380-d73562531eb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:01.844143  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [a378c1e7-c439-427f-b56e-7aeb2397dda2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:01.844149  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [7d8c33ff-f8bd-4ca8-a1cd-7e03a3c1ea55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:01.844156  255215 system_pods.go:61] "kube-proxy-tqlkl" [3dc68d59-da16-4a8e-8664-24c280769e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:01.844162  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [54addcee-6a78-4a9d-9b15-a02e79ac92be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:01.844169  255215 system_pods.go:61] "metrics-server-74d5c6b9c-h5tt6" [6f8a838b-81d8-444d-aba1-fe46fefe8815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:01.844175  255215 system_pods.go:61] "storage-provisioner" [65cd2cbe-dcb1-4842-af27-551c8d0a93d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:01.844182  255215 system_pods.go:74] duration metric: took 39.323312ms to wait for pod list to return data ...
	I0817 22:25:01.844194  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:01.857431  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:01.857471  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:01.857485  255215 node_conditions.go:105] duration metric: took 13.285661ms to run NodePressure ...
	I0817 22:25:01.857511  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:02.318085  255215 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329089  255215 kubeadm.go:787] kubelet initialised
	I0817 22:25:02.329122  255215 kubeadm.go:788] duration metric: took 10.998414ms waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329133  255215 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.338233  255215 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:59.891549  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.386499  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.889146  255057 node_ready.go:49] node "no-preload-525875" has status "Ready":"True"
	I0817 22:25:02.889193  255057 node_ready.go:38] duration metric: took 7.084075756s waiting for node "no-preload-525875" to be "Ready" ...
	I0817 22:25:02.889209  255057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.915138  255057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926622  255057 pod_ready.go:92] pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:02.926662  255057 pod_ready.go:81] duration metric: took 11.479543ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926677  255057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.597215  255491 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.503742232s)
	I0817 22:25:04.597254  255491 crio.go:451] Took 3.503924 seconds to extract the tarball
	I0817 22:25:04.597269  255491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:04.640799  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:04.683452  255491 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:25:04.683478  255491 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:25:04.683564  255491 ssh_runner.go:195] Run: crio config
	I0817 22:25:04.755546  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:04.755579  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:04.755618  255491 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:04.755646  255491 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8444 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-321287 NodeName:default-k8s-diff-port-321287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:25:04.755865  255491 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-321287"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:04.755964  255491 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-321287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
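The kubeadm and kubelet configuration dumped above is what the restart path will apply. As a rough, illustrative sketch only (not minikube code; the path and field names are taken from the YAML logged above), the multi-document kubeadm.yaml could be inspected programmatically in Go with gopkg.in/yaml.v3:

    // inspect_kubeadm_config.go - sketch: decode the multi-document kubeadm.yaml
    // shown in the log above and print a few ClusterConfiguration fields.
    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Only the fields we want to peek at; everything else is ignored by the decoder.
    type clusterConfig struct {
        Kind                 string `yaml:"kind"`
        KubernetesVersion    string `yaml:"kubernetesVersion"`
        ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
        Networking           struct {
            PodSubnet     string `yaml:"podSubnet"`
            ServiceSubnet string `yaml:"serviceSubnet"`
        } `yaml:"networking"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // the file holds several YAML documents separated by ---
        for {
            var doc clusterConfig
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all documents have been read
            }
            if doc.Kind == "ClusterConfiguration" {
                fmt.Println(doc.KubernetesVersion, doc.ControlPlaneEndpoint, doc.Networking.PodSubnet)
            }
        }
    }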
	I0817 22:25:04.756040  255491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:25:04.768800  255491 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:04.768884  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:04.779179  255491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0817 22:25:04.798848  255491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:04.818088  255491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0817 22:25:04.839021  255491 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:04.843996  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:04.858954  255491 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287 for IP: 192.168.50.30
	I0817 22:25:04.858992  255491 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:04.859193  255491 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:04.859263  255491 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:04.859371  255491 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/client.key
	I0817 22:25:04.859452  255491 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key.2a920f45
	I0817 22:25:04.859519  255491 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key
	I0817 22:25:04.859673  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:04.859717  255491 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:04.859733  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:04.859766  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:04.859800  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:04.859839  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:04.859901  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:04.860739  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:04.893191  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:25:04.923817  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:04.953192  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:25:04.985353  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:05.015743  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:05.043565  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:05.072283  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:05.102360  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:05.131090  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:05.158164  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:05.183921  255491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:05.201231  255491 ssh_runner.go:195] Run: openssl version
	I0817 22:25:05.207477  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:05.218696  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224473  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224551  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.230753  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:05.244810  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:05.255480  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.260972  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.261054  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.267724  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:05.280466  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:05.291975  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298403  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298519  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.306541  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:05.318878  255491 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:05.324755  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:05.333167  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:05.341869  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:05.350173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:05.357173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:05.364289  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
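The `openssl x509 -checkend 86400` runs above ask whether each certificate will still be valid 86400 seconds (24 hours) from now; a failing check would force the cert to be regenerated. A minimal Go equivalent of that test (illustrative only, not minikube's implementation) looks like:

    // certcheck.go - sketch of the same "-checkend 86400" test using crypto/x509.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Same question openssl answers: is NotAfter before now+d?
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }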
	I0817 22:25:05.372301  255491 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:05.372435  255491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:05.372493  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:05.409127  255491 cri.go:89] found id: ""
	I0817 22:25:05.409211  255491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:05.420288  255491 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:05.420316  255491 kubeadm.go:636] restartCluster start
	I0817 22:25:05.420401  255491 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:05.431336  255491 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.433035  255491 kubeconfig.go:92] found "default-k8s-diff-port-321287" server: "https://192.168.50.30:8444"
	I0817 22:25:05.437153  255491 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:05.446894  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.446956  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.459319  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.459353  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.459412  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.472543  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.973294  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.973386  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.986474  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.473007  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.473141  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.485870  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
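The block of "Checking apiserver status" entries from PID 255491 above (and the further blocks that follow) shows the restart path polling for a kube-apiserver process roughly every 500 ms until a deadline; the later "needs reconfigure: apiserver error: context deadline exceeded" line is the outcome of that loop. A minimal sketch of such a poll-until-deadline loop in Go (hypothetical helper, not minikube's actual code) is:

    // pollsketch.go - illustrative poll-until-deadline loop, similar in spirit to
    // the repeated "Checking apiserver status" attempts in the log above.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverPID mirrors the logged command: sudo pgrep -xnf kube-apiserver.*minikube.*
    func apiserverPID(ctx context.Context) (string, error) {
        out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return string(out), nil
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()

        for {
            if pid, err := apiserverPID(ctx); err == nil {
                fmt.Println("apiserver pid:", pid)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("gave up:", ctx.Err()) // e.g. "context deadline exceeded"
                return
            case <-ticker.C:
            }
        }
    }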
	I0817 22:25:02.398531  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:02.399142  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:02.399174  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:02.399045  256493 retry.go:31] will retry after 1.143040413s: waiting for machine to come up
	I0817 22:25:03.543421  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:03.544040  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:03.544076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:03.543971  256493 retry.go:31] will retry after 1.654291601s: waiting for machine to come up
	I0817 22:25:05.200880  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:05.201405  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:05.201435  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:05.201350  256493 retry.go:31] will retry after 1.752048888s: waiting for machine to come up
	I0817 22:25:04.379203  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.872822  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:04.499009  255057 pod_ready.go:92] pod "etcd-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.499040  255057 pod_ready.go:81] duration metric: took 1.572354603s waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.499057  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761691  255057 pod_ready.go:92] pod "kube-apiserver-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.761719  255057 pod_ready.go:81] duration metric: took 262.653075ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761734  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769937  255057 pod_ready.go:92] pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.769968  255057 pod_ready.go:81] duration metric: took 8.225874ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769983  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881406  255057 pod_ready.go:92] pod "kube-proxy-pzpk2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.881444  255057 pod_ready.go:81] duration metric: took 111.452654ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881461  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643623  255057 pod_ready.go:92] pod "kube-scheduler-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:05.643648  255057 pod_ready.go:81] duration metric: took 762.178998ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643658  255057 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:07.695130  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.972803  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.972898  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.985259  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.473416  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.473551  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.485378  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.973567  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.973708  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.989454  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.472762  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.472894  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.489910  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.972732  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.972822  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.984958  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.473569  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.473709  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.490412  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.972908  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.972987  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.986072  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.473333  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.473429  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.485656  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.973314  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.973423  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.989391  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:11.472953  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.473077  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.485192  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.956350  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:06.956874  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:06.956904  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:06.956830  256493 retry.go:31] will retry after 2.09338178s: waiting for machine to come up
	I0817 22:25:09.052006  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:09.052516  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:09.052549  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:09.052447  256493 retry.go:31] will retry after 3.023234706s: waiting for machine to come up
	I0817 22:25:08.877674  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:09.370723  255215 pod_ready.go:92] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:09.370754  255215 pod_ready.go:81] duration metric: took 7.032445075s waiting for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:09.370767  255215 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893038  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:10.893076  255215 pod_ready.go:81] duration metric: took 1.522300039s waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893091  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918300  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:11.918330  255215 pod_ready.go:81] duration metric: took 1.025229003s waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918347  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.192198  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:12.692398  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:11.973001  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.973083  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.984794  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.473426  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.473527  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.489566  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.972736  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.972840  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.984972  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.473572  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.473665  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.485760  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.972804  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.972952  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.984788  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.473423  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.473501  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.484892  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.973394  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.973481  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.985492  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:15.447933  255491 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:15.447967  255491 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:15.447983  255491 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:15.448044  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:15.483471  255491 cri.go:89] found id: ""
	I0817 22:25:15.483596  255491 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:15.500292  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:15.510630  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:15.510695  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520738  255491 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520771  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:15.635683  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:12.079485  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:12.080041  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:12.080069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:12.079986  256493 retry.go:31] will retry after 4.097355523s: waiting for machine to come up
	I0817 22:25:16.178550  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:16.179032  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:16.179063  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:16.178988  256493 retry.go:31] will retry after 4.178327275s: waiting for machine to come up
	I0817 22:25:14.176089  255215 pod_ready.go:102] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:14.679850  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.679881  255215 pod_ready.go:81] duration metric: took 2.761525031s waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.679894  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685308  255215 pod_ready.go:92] pod "kube-proxy-tqlkl" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.685339  255215 pod_ready.go:81] duration metric: took 5.435708ms waiting for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685352  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967073  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.967099  255215 pod_ready.go:81] duration metric: took 281.740411ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967110  255215 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:17.277033  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:15.190295  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:17.193522  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:16.723896  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0881723s)
	I0817 22:25:16.723933  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:16.940953  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.025208  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.110784  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:17.110880  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.123610  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.645363  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.145697  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.645211  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.145515  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.645764  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.665892  255491 api_server.go:72] duration metric: took 2.555110324s to wait for apiserver process to appear ...
	I0817 22:25:19.665920  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:19.665938  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:20.359726  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360375  254975 main.go:141] libmachine: (old-k8s-version-294781) Found IP for machine: 192.168.72.56
	I0817 22:25:20.360408  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserving static IP address...
	I0817 22:25:20.360426  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has current primary IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360798  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserved static IP address: 192.168.72.56
	I0817 22:25:20.360843  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.360866  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting for SSH to be available...
	I0817 22:25:20.360898  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | skip adding static IP to network mk-old-k8s-version-294781 - found existing host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"}
	I0817 22:25:20.360918  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Getting to WaitForSSH function...
	I0817 22:25:20.363319  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.363721  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.363767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.364016  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH client type: external
	I0817 22:25:20.364069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa (-rw-------)
	I0817 22:25:20.364115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:25:20.364135  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | About to run SSH command:
	I0817 22:25:20.364175  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | exit 0
	I0817 22:25:20.454327  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | SSH cmd err, output: <nil>: 
	I0817 22:25:20.454772  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetConfigRaw
	I0817 22:25:20.455585  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.458846  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.459420  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459910  254975 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/config.json ...
	I0817 22:25:20.460207  254975 machine.go:88] provisioning docker machine ...
	I0817 22:25:20.460240  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:20.460489  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460712  254975 buildroot.go:166] provisioning hostname "old-k8s-version-294781"
	I0817 22:25:20.460743  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460912  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.463811  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464166  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.464216  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464391  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.464610  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464779  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464936  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.465157  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.465566  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.465578  254975 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-294781 && echo "old-k8s-version-294781" | sudo tee /etc/hostname
	I0817 22:25:20.604184  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-294781
	
	I0817 22:25:20.604223  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.607313  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.607668  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.607706  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.608091  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.608335  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608511  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608656  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.608845  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.609344  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.609368  254975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-294781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-294781/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-294781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:25:20.731574  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:25:20.731639  254975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:25:20.731679  254975 buildroot.go:174] setting up certificates
	I0817 22:25:20.731697  254975 provision.go:83] configureAuth start
	I0817 22:25:20.731717  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.732057  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.735344  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.735748  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.735780  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.736038  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.738896  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739346  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.739384  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739562  254975 provision.go:138] copyHostCerts
	I0817 22:25:20.739634  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:25:20.739650  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:25:20.739733  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:25:20.739875  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:25:20.739889  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:25:20.739921  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:25:20.740027  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:25:20.740040  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:25:20.740069  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:25:20.740159  254975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-294781 san=[192.168.72.56 192.168.72.56 localhost 127.0.0.1 minikube old-k8s-version-294781]
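The "generating server cert" step above issues a server certificate for the machine, carrying the SAN list shown (the VM IP, localhost, 127.0.0.1, minikube, and the machine name) and signed by the local CA. A compact Go sketch of building such a SAN-bearing certificate template (illustrative only; self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem) is:

    // sancert.go - sketch: build a server certificate template carrying the SANs
    // seen in the log (IPs, localhost, minikube, and the host name).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-294781"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries, matching the san=[...] list logged above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-294781"},
            IPAddresses: []net.IP{net.ParseIP("192.168.72.56"), net.ParseIP("127.0.0.1")},
        }

        // Self-signed for the sketch; the real flow passes the CA cert and CA key
        // instead of tmpl and key as the parent/signer arguments.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = der // would be PEM-encoded and written out as machines/server.pem
    }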
	I0817 22:25:20.937408  254975 provision.go:172] copyRemoteCerts
	I0817 22:25:20.937480  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:25:20.937508  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.940609  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941074  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.941115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941294  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.941469  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.941678  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.941899  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.033976  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:25:21.062438  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:25:21.090325  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:25:21.116263  254975 provision.go:86] duration metric: configureAuth took 384.54455ms
	I0817 22:25:21.116295  254975 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:25:21.116550  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:25:21.116667  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.119767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120295  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.120351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.120735  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.120898  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.121114  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.121330  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.121982  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.122011  254975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:25:21.449644  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:25:21.449675  254975 machine.go:91] provisioned docker machine in 989.449203ms
	I0817 22:25:21.449686  254975 start.go:300] post-start starting for "old-k8s-version-294781" (driver="kvm2")
	I0817 22:25:21.449696  254975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:25:21.449713  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.450065  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:25:21.450112  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.453436  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.453847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.453893  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.454092  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.454320  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.454501  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.454682  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.544501  254975 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:25:21.549102  254975 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:25:21.549128  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:25:21.549201  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:25:21.549301  254975 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:25:21.549425  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:25:21.559169  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:21.585459  254975 start.go:303] post-start completed in 135.754284ms
	I0817 22:25:21.585496  254975 fix.go:56] fixHost completed within 24.48491231s
	I0817 22:25:21.585531  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.588650  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589045  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.589076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589236  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.589445  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589638  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589810  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.590026  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.590596  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.590621  254975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:25:21.704138  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311121.622295369
	
	I0817 22:25:21.704162  254975 fix.go:206] guest clock: 1692311121.622295369
	I0817 22:25:21.704170  254975 fix.go:219] Guest: 2023-08-17 22:25:21.622295369 +0000 UTC Remote: 2023-08-17 22:25:21.585502401 +0000 UTC m=+364.810906249 (delta=36.792968ms)
	I0817 22:25:21.704193  254975 fix.go:190] guest clock delta is within tolerance: 36.792968ms
	I0817 22:25:21.704200  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 24.603659499s
	I0817 22:25:21.704228  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.704524  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:21.707198  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707512  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.707555  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707715  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708285  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708516  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708605  254975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:25:21.708670  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.708790  254975 ssh_runner.go:195] Run: cat /version.json
	I0817 22:25:21.708816  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.711462  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711744  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711858  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.711906  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712090  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712154  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.712219  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712326  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712347  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712539  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712541  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712749  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712766  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.712936  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:19.775731  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.777036  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:19.693695  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:22.189616  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.818518  254975 ssh_runner.go:195] Run: systemctl --version
	I0817 22:25:21.824498  254975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:25:21.971461  254975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:25:21.978188  254975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:25:21.978271  254975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:25:21.993704  254975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:25:21.993738  254975 start.go:466] detecting cgroup driver to use...
	I0817 22:25:21.993820  254975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:25:22.009074  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:25:22.022874  254975 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:25:22.022935  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:25:22.036508  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:25:22.050919  254975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:25:22.174894  254975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:25:22.307776  254975 docker.go:212] disabling docker service ...
	I0817 22:25:22.307863  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:25:22.322017  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:25:22.334550  254975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:25:22.439721  254975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:25:22.554591  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:25:22.570460  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:25:22.588685  254975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:25:22.588767  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.599716  254975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:25:22.599801  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.611990  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.623873  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.636093  254975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:25:22.647438  254975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:25:22.657266  254975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:25:22.657338  254975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:25:22.672463  254975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:25:22.683508  254975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:25:22.799912  254975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:25:22.995704  254975 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:25:22.995816  254975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:25:23.003199  254975 start.go:534] Will wait 60s for crictl version
	I0817 22:25:23.003280  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:23.008350  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:25:23.042651  254975 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:25:23.042763  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.093624  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.142140  254975 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
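The block above finishes the container-runtime setup for this profile: crictl is pointed at unix:///var/run/crio/crio.sock, the pause image and cgroup manager are rewritten in /etc/crio/crio.conf.d/02-crio.conf with sed, and crio is restarted before minikube waits on its socket and checks the crictl version. A minimal Go sketch of that rewrite step, assuming local sudo access; the file path and values are taken from the log, while setCrioOption is an illustrative helper name, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// setCrioOption rewrites one "key = value" line in the CRI-O drop-in config,
// mirroring the sed commands shown in the log above.
func setCrioOption(key, value string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
	if out, err := exec.Command("sudo", "sed", "-i", expr, conf).CombinedOutput(); err != nil {
		return fmt.Errorf("sed %q failed: %v: %s", expr, err, out)
	}
	return nil
}

func main() {
	// Values taken from the log: pin the pause image and use cgroupfs.
	for k, v := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.1",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioOption(k, v); err != nil {
			panic(err)
		}
	}
	// The log then runs: sudo systemctl daemon-reload && sudo systemctl restart crio
	if err := exec.Command("sudo", "systemctl", "restart", "crio").Run(); err != nil {
		panic(err)
	}
}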
	I0817 22:25:24.666188  255491 api_server.go:269] stopped: https://192.168.50.30:8444/healthz: Get "https://192.168.50.30:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:24.666264  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:24.903729  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:24.903775  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:25.404125  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.420215  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.420261  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:25.903943  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.914463  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.914514  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:26.403966  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:26.414021  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:25:26.437708  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:26.437750  255491 api_server.go:131] duration metric: took 6.771821605s to wait for apiserver health ...
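The healthz exchange above is a plain polling loop: the apiserver at https://192.168.50.30:8444/healthz is requested repeatedly, 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) are treated as "not ready yet", and the wait ends once a 200 ok comes back. A rough sketch of that pattern in Go, assuming the endpoint and the roughly half-second retry interval visible in the timestamps; this is not the actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; 403 and 500 responses (as in the log) are retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cluster CA is not in the host trust store, so skip verification
		// for this anonymous probe (which is why the log shows a 403 first).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.30:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}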
	I0817 22:25:26.437779  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:26.437789  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:26.440095  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:26.441921  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:26.469640  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:26.514785  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:26.532553  255491 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:26.532616  255491 system_pods.go:61] "coredns-5d78c9869d-v74x9" [1c42e9be-16fa-47c2-ab04-9ec805320760] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:26.532631  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [a3655572-9d89-4ef6-85db-85dc454d1021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:26.532659  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [6786ac16-78df-4909-8542-0952af5beff6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:26.532675  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [ac8085d0-db9c-4229-b816-4753b7cfcae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:26.532686  255491 system_pods.go:61] "kube-proxy-4d9dx" [22447888-6570-47b7-baac-a5842688de9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:26.532697  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [bfcfc726-e659-4cb9-ad36-9887ddfaf170] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:26.532713  255491 system_pods.go:61] "metrics-server-74d5c6b9c-25l6w" [205dcf88-9d10-416b-8fd0-c93939208c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:26.532722  255491 system_pods.go:61] "storage-provisioner" [be486251-ebb9-4d0b-85c9-fe04e76634e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:26.532738  255491 system_pods.go:74] duration metric: took 17.92531ms to wait for pod list to return data ...
	I0817 22:25:26.532751  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:26.541133  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:26.541180  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:26.541197  255491 node_conditions.go:105] duration metric: took 8.431415ms to run NodePressure ...
	I0817 22:25:26.541228  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:23.143729  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:23.146678  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147145  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:23.147178  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147433  254975 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:25:23.151860  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:23.165714  254975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 22:25:23.165805  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:23.207234  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:23.207334  254975 ssh_runner.go:195] Run: which lz4
	I0817 22:25:23.211497  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:25:23.216272  254975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:25:23.216309  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0817 22:25:25.170164  254975 crio.go:444] Took 1.958697 seconds to copy over tarball
	I0817 22:25:25.170253  254975 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:25:23.792764  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.276276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:24.193719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.692837  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.873863  255491 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:26.878982  255491 kubeadm.go:787] kubelet initialised
	I0817 22:25:26.879005  255491 kubeadm.go:788] duration metric: took 5.10797ms waiting for restarted kubelet to initialise ...
	I0817 22:25:26.879014  255491 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:26.885772  255491 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:29.448692  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:28.464409  254975 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.294096057s)
	I0817 22:25:28.464448  254975 crio.go:451] Took 3.294247 seconds to extract the tarball
	I0817 22:25:28.464461  254975 ssh_runner.go:146] rm: /preloaded.tar.lz4
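The preload path above avoids pulling each v1.16.0 image separately: a prebuilt image tarball is copied into the guest, unpacked into /var with lz4-compressed tar, and then deleted. A hedged Go sketch of the unpack-and-cleanup step, using the same flags as the log; extractPreload is an illustrative name, not a minikube function.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed image tarball into /var and then
// removes it, matching the "tar -I lz4 -C /var -xf" and rm steps in the log.
func extractPreload(tarball string) error {
	start := time.Now()
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}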
	I0817 22:25:28.505546  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:28.550245  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:28.550282  254975 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:25:28.550393  254975 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.550419  254975 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.550425  254975 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.550466  254975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.550416  254975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.550388  254975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.550543  254975 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0817 22:25:28.550382  254975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551670  254975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551673  254975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.551765  254975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.551779  254975 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.551793  254975 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0817 22:25:28.551814  254975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.551841  254975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.552852  254975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.736900  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.746950  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.747215  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.749256  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.754813  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0817 22:25:28.767639  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.778459  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.834796  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.845176  254975 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0817 22:25:28.845233  254975 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.845295  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.896784  254975 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0817 22:25:28.896843  254975 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.896901  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919129  254975 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0817 22:25:28.919247  254975 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.919192  254975 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0817 22:25:28.919301  254975 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.919320  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919332  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972779  254975 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0817 22:25:28.972831  254975 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0817 22:25:28.972863  254975 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0817 22:25:28.972898  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972901  254975 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.973013  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.986909  254975 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0817 22:25:28.986957  254975 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.987007  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:29.083047  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:29.083137  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:29.083204  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:29.083276  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0817 22:25:29.083227  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0817 22:25:29.083354  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:29.083408  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:29.214678  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0817 22:25:29.214743  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0817 22:25:29.214777  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0817 22:25:29.214847  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0817 22:25:29.214934  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.221086  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0817 22:25:29.221101  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0817 22:25:29.221162  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0817 22:25:29.223655  254975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0817 22:25:29.223684  254975 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.223753  254975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0817 22:25:30.774685  254975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550895846s)
	I0817 22:25:30.774722  254975 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0817 22:25:30.774776  254975 cache_images.go:92] LoadImages completed in 2.224475745s
	W0817 22:25:30.774942  254975 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
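Each image marked "needs transfer" above is removed from the runtime with crictl and then reloaded from the host-side cache via "sudo podman load -i ..."; when a cache file is absent (kube-scheduler_v1.16.0 here), the whole LoadImages phase ends with the warning instead of failing the start. A small sketch of the load step under those assumptions; loadCachedImage is an illustrative name and this is not minikube's cache_images.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage loads one image archive into the CRI-O image store via
// podman, as in the "sudo podman load -i ..." line of the log. A missing
// cache file is reported rather than treated as fatal, mirroring the
// "Unable to load cached images" warning above.
func loadCachedImage(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("cached image not found: %w", err)
	}
	if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Fprintln(os.Stderr, "X Unable to load cached images:", err)
	}
}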
	I0817 22:25:30.775051  254975 ssh_runner.go:195] Run: crio config
	I0817 22:25:30.840592  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:30.840623  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:30.840650  254975 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:30.840680  254975 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-294781 NodeName:old-k8s-version-294781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 22:25:30.840917  254975 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-294781"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-294781
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.56:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:30.841030  254975 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-294781 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:25:30.841111  254975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0817 22:25:30.850719  254975 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:30.850818  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:30.862807  254975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0817 22:25:30.882111  254975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:30.900496  254975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0817 22:25:30.921163  254975 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:30.925789  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
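The one-liner above refreshes the control-plane.minikube.internal mapping: it filters any existing entry out of /etc/hosts, appends the current IP, writes the result to a temp file, and copies it back with sudo. The same idea expressed in Go as a sketch only; the paths and hostname come from the log, the helper name does not.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// setHostsEntry rewrites /etc/hosts so that exactly one line maps the given
// hostname, following the grep -v / echo / sudo cp pattern in the log.
func setHostsEntry(ip, host string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for this hostname (tab-separated, end of line).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
}

func main() {
	if err := setHostsEntry("192.168.72.56", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}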
	I0817 22:25:30.941284  254975 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781 for IP: 192.168.72.56
	I0817 22:25:30.941335  254975 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:30.941556  254975 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:30.941617  254975 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:30.941728  254975 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/client.key
	I0817 22:25:30.941792  254975 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key.aa8f9bd0
	I0817 22:25:30.941827  254975 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key
	I0817 22:25:30.941948  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:30.941994  254975 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:30.942005  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:30.942039  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:30.942107  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:30.942141  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:30.942200  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:30.942953  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:30.973814  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:25:31.003939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:31.035137  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:25:31.063172  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:31.092059  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:31.120881  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:31.148113  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:31.175102  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:31.204939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:31.231548  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:31.263908  254975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:31.287143  254975 ssh_runner.go:195] Run: openssl version
	I0817 22:25:31.293380  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:31.307058  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313520  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313597  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.321182  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:31.332412  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:31.343318  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.348972  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.349044  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.355568  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:31.366257  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:31.376489  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382818  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382919  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.390171  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:31.400360  254975 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:31.406177  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:31.413881  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:31.422198  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:31.429468  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:31.437072  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:31.444150  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
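The sequence above verifies each control-plane certificate with "openssl x509 -noout -checkend 86400", i.e. it asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires within that window. A minimal sketch of how such a check could be driven from Go follows (illustrative only, not minikube's implementation; the certificate path in main is just one of the paths from the log):

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h runs "openssl x509 -noout -checkend 86400" against the
// certificate at path and reports whether it is still valid 24 hours from now.
func certValidFor24h(path string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// openssl exits non-zero when the certificate expires within the window.
			return false, nil
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return true, nil
}

func main() {
	ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	fmt.Println("valid for 24h:", ok, "err:", err)
}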
	I0817 22:25:31.450952  254975 kubeadm.go:404] StartCluster: {Name:old-k8s-version-294781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:31.451064  254975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:31.451140  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:31.489009  254975 cri.go:89] found id: ""
	I0817 22:25:31.489098  254975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:31.499098  254975 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:31.499126  254975 kubeadm.go:636] restartCluster start
	I0817 22:25:31.499191  254975 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:31.510909  254975 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.512049  254975 kubeconfig.go:92] found "old-k8s-version-294781" server: "https://192.168.72.56:8443"
	I0817 22:25:31.514634  254975 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:31.525968  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.526039  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.539397  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.539423  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.539485  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.552492  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:28.276789  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:30.406349  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:29.190524  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.195732  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.919929  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.415784  255491 pod_ready.go:92] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:32.415817  255491 pod_ready.go:81] duration metric: took 5.530013816s waiting for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:32.415840  255491 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:34.435177  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.435405  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.053512  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.053604  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.065409  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.553555  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.553647  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.566402  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.052703  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.052785  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.069027  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.552583  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.552724  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.566692  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.053418  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.053493  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.065794  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.553389  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.553490  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.566130  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.052663  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.052753  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.065276  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.553446  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.553544  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.567754  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.053326  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.053407  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.066562  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.553098  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.553200  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.564869  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.777224  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:35.273781  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.276847  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:33.690890  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.190746  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.435673  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.435712  255491 pod_ready.go:81] duration metric: took 5.019858859s waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.435724  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441582  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.441602  255491 pod_ready.go:81] duration metric: took 5.870633ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441614  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448615  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.448643  255491 pod_ready.go:81] duration metric: took 7.021551ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448656  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454742  255491 pod_ready.go:92] pod "kube-proxy-4d9dx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.454768  255491 pod_ready.go:81] duration metric: took 6.104572ms waiting for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454780  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462598  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.462623  255491 pod_ready.go:81] duration metric: took 7.834341ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462637  255491 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:39.741207  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.053213  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.053363  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.065752  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:37.553604  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.553709  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.569278  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.052848  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.052956  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.065011  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.552809  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.552915  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.564702  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.053287  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.053378  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.065004  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.553557  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.553654  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.565776  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.053269  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.053352  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.065089  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.552595  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.552718  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.564921  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.053531  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:41.053617  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:41.065803  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.526724  254975 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:41.526774  254975 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:41.526788  254975 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:41.526858  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:41.560831  254975 cri.go:89] found id: ""
	I0817 22:25:41.560931  254975 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:41.577926  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:41.587081  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:41.587169  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596656  254975 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596690  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:41.716908  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:39.776178  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.275946  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:38.193834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:40.691324  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.692667  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:41.745307  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:44.242440  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.243469  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.840419  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123468828s)
	I0817 22:25:42.840454  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.062568  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.150374  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.265948  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:43.266043  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.284133  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.804512  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.304041  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.803961  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.828050  254975 api_server.go:72] duration metric: took 1.562100837s to wait for apiserver process to appear ...
	I0817 22:25:44.828085  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:44.828102  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.828570  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:44.828611  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.829005  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:45.329868  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.276477  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.775206  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:45.189460  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:47.690349  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:48.741121  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.742231  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.330553  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:50.330619  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.714219  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.714253  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:51.714268  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.756012  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.756052  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:49.276427  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.775567  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:49.698834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:52.190711  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.829442  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.888999  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:51.889031  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.329747  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.337398  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.337432  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.829817  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.839157  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.839187  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:53.329580  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:53.336858  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:25:53.347151  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:25:53.347191  254975 api_server.go:131] duration metric: took 8.519097199s to wait for apiserver health ...
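The healthz probing above follows a simple pattern: poll https://192.168.72.56:8443/healthz, tolerating connection refused, 403 (anonymous access before RBAC bootstrap completes) and 500 (post-start hooks still failing), until the endpoint returns 200. A rough sketch of that retry loop in Go, assuming a self-signed apiserver certificate is acceptable for the probe (illustrative only, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns HTTP 200 or the
// timeout elapses. TLS verification is skipped here because the probe only
// cares about reachability and readiness, not identity.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // connection refused, 403 and 500 are all retried
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.56:8443/healthz", 4*time.Minute))
}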
	I0817 22:25:53.347204  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:53.347212  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:53.349243  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:52.743242  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:55.241261  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:53.350976  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:53.364808  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:53.397606  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:53.411868  254975 system_pods.go:59] 7 kube-system pods found
	I0817 22:25:53.411903  254975 system_pods.go:61] "coredns-5644d7b6d9-nz5d2" [5514f434-2c17-42dc-b35b-fef5bd6886fb] Running
	I0817 22:25:53.411909  254975 system_pods.go:61] "etcd-old-k8s-version-294781" [75919c29-02ae-46f6-8173-507b491d16da] Running
	I0817 22:25:53.411920  254975 system_pods.go:61] "kube-apiserver-old-k8s-version-294781" [f6d458ca-a84f-40dc-8b6a-b53fb8062c50] Running
	I0817 22:25:53.411930  254975 system_pods.go:61] "kube-controller-manager-old-k8s-version-294781" [0827f676-c11c-44b1-9bca-f8f905448490] Pending
	I0817 22:25:53.411937  254975 system_pods.go:61] "kube-proxy-f2bdh" [8b0dfe14-026a-44e1-9c6f-7f16fb61f90e] Running
	I0817 22:25:53.411943  254975 system_pods.go:61] "kube-scheduler-old-k8s-version-294781" [9ced2a30-44a8-421f-94ef-19be20b58c5d] Running
	I0817 22:25:53.411947  254975 system_pods.go:61] "storage-provisioner" [c9c05cca-5426-4071-a408-815c723a76f3] Running
	I0817 22:25:53.411954  254975 system_pods.go:74] duration metric: took 14.318728ms to wait for pod list to return data ...
	I0817 22:25:53.411961  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:53.415672  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:53.415715  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:53.415731  254975 node_conditions.go:105] duration metric: took 3.76549ms to run NodePressure ...
	I0817 22:25:53.415758  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:53.808911  254975 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:53.814276  254975 retry.go:31] will retry after 200.301174ms: kubelet not initialised
	I0817 22:25:54.020423  254975 retry.go:31] will retry after 376.047728ms: kubelet not initialised
	I0817 22:25:54.401967  254975 retry.go:31] will retry after 672.586884ms: kubelet not initialised
	I0817 22:25:55.079229  254975 retry.go:31] will retry after 1.101994757s: kubelet not initialised
	I0817 22:25:56.186236  254975 retry.go:31] will retry after 770.380926ms: kubelet not initialised
	I0817 22:25:53.777865  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.275799  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:54.690880  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.189416  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.242279  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.742604  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.961679  254975 retry.go:31] will retry after 2.235217601s: kubelet not initialised
	I0817 22:25:59.205012  254975 retry.go:31] will retry after 2.063266757s: kubelet not initialised
	I0817 22:26:01.275712  254975 retry.go:31] will retry after 5.105867057s: kubelet not initialised
	I0817 22:25:58.774815  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.275856  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.190180  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.692286  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.744707  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.240683  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.388158  254975 retry.go:31] will retry after 3.608427827s: kubelet not initialised
	I0817 22:26:03.775281  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.274839  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.190713  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.689980  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.742399  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.742739  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.004038  254975 retry.go:31] will retry after 8.940252852s: kubelet not initialised
	I0817 22:26:08.275499  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.275871  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.696436  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:11.189718  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.240363  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.241894  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:12.776238  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.274945  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.690119  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:16.189786  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:17.741982  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:20.242289  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.951040  254975 retry.go:31] will retry after 14.553103306s: kubelet not initialised
	I0817 22:26:17.774269  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:19.775075  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.274390  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.690720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:21.191013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.242355  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.742592  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.275310  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:26.774906  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:23.690032  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:25.690127  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.692342  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.243421  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:29.245714  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:28.777378  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.274134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:30.189730  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:32.689849  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.741791  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.240900  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:36.241988  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:33.521718  254975 kubeadm.go:787] kubelet initialised
	I0817 22:26:33.521745  254975 kubeadm.go:788] duration metric: took 39.712803989s waiting for restarted kubelet to initialise ...
	I0817 22:26:33.521755  254975 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:26:33.535522  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545447  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.545474  254975 pod_ready.go:81] duration metric: took 9.918514ms waiting for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545487  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551823  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.551853  254975 pod_ready.go:81] duration metric: took 6.357251ms waiting for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551867  254975 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559246  254975 pod_ready.go:92] pod "etcd-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.559278  254975 pod_ready.go:81] duration metric: took 7.402957ms waiting for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559291  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565344  254975 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.565373  254975 pod_ready.go:81] duration metric: took 6.072723ms waiting for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565387  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909036  254975 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.909073  254975 pod_ready.go:81] duration metric: took 343.677116ms waiting for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909089  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308592  254975 pod_ready.go:92] pod "kube-proxy-f2bdh" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.308619  254975 pod_ready.go:81] duration metric: took 399.522419ms waiting for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308630  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708489  254975 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.708517  254975 pod_ready.go:81] duration metric: took 399.879822ms waiting for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708528  254975 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
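The pod_ready lines that dominate the rest of this log all come from the same kind of loop: fetch the pod from the kube-system namespace and check whether its Ready condition is True, retrying until a per-pod timeout expires. A condensed sketch of that check using client-go (illustrative only, not minikube's pod_ready implementation; the kubeconfig path is a placeholder and the pod name is copied from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod currently has its Ready condition set to True.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := isPodReady(ctx, cs, "kube-system", "metrics-server-74d5856cc6-xv69h")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // keep polling until the deadline
	}
	fmt.Println("pod never became Ready within the timeout")
}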
	I0817 22:26:33.275646  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:35.774730  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.692013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.191914  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.242929  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.741450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.516268  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.275712  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.774133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.690461  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:41.690828  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.242204  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.741216  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:42.016209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.516019  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.275668  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.776837  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.189846  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:46.691439  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.742285  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.241123  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.016817  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.517406  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:48.276244  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.774977  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.189105  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:51.190270  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.241800  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.739978  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.016631  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.515565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.516890  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.274258  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.278000  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.192619  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.693990  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.742737  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.241115  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.241654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.015461  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.017347  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:57.775264  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.775399  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.776382  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:58.190121  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:00.190792  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:02.697428  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.741654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.742940  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.516565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.516966  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:04.275212  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:06.277355  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.190366  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:07.190973  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.244485  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.741985  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.015202  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.016691  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.774384  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.774729  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:09.692011  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.190853  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.742313  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:15.241577  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.514881  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.516950  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.517383  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.774867  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.775482  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.274793  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.689813  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.692012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.243159  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.517518  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.016576  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.275829  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.276653  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.692315  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.189564  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:22.240740  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:24.241960  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.242201  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.017348  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.515756  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.775957  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.275937  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.189646  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.690338  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.690947  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.741912  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.742165  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.516071  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.517838  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.276630  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.775134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.691012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:31.696187  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:33.241142  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:35.243536  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.017452  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.515974  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.516450  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.775448  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.775822  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.274968  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.188369  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.188928  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.741436  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.741983  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.015982  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.516526  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.278879  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.774782  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:38.189378  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:40.695851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:42.240995  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.741178  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.015737  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.018254  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.776276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.276133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.188678  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:45.189618  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:47.191825  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.741669  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.241194  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.242571  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.516687  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.016735  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.277486  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:50.775420  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.689852  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.691216  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.741209  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.743232  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.518209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.016075  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.275443  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.774204  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.692276  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.190072  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.242009  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:00.242183  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.516449  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.016290  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:57.775327  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:59.775642  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.275827  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.691467  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.189998  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.740875  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.742481  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.523305  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.016025  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.275917  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.777604  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.190940  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:05.690559  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.693124  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.241721  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.241889  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:08.017490  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.018815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.274176  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.275009  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.190851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.689465  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.741056  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.241846  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:16.243898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.516550  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.017547  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:13.276368  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.773960  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.690587  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.189824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:18.742657  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.243561  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.515978  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:20.016035  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.774474  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.776240  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.275209  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.194335  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.691142  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:23.743251  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.241450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.021055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.516645  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.776861  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.274029  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.189740  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.691801  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:28.242364  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:30.740610  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.016851  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.017289  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.517096  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.774126  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.275287  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.189744  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.691190  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.741643  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:35.242108  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.015792  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.016247  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.773849  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.777072  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:33.692774  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.189115  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:37.741756  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.244685  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.016815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.017616  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:39.276756  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:41.774190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.190001  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.690824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.742547  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.241354  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.518073  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.016560  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.776627  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:46.275092  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.189166  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.692178  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.697772  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.242829  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.741555  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.516429  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.516588  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:48.775347  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:51.274069  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:50.191415  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.694362  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.242367  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.742705  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.019113  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.516748  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:53.275190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.773511  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.189720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.189811  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.241152  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.242170  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.015866  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.016464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.515901  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.776667  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:00.273941  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.190719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.190988  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.741107  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.742524  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.243093  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.516444  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.017964  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:02.775583  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.280071  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.690586  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.643882  255057 pod_ready.go:81] duration metric: took 4m0.000182343s waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:05.643921  255057 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:05.643932  255057 pod_ready.go:38] duration metric: took 4m2.754707603s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
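The long run of pod_ready.go:102 lines above is minikube polling the metrics-server pod until it reports Ready, giving up once the 4m deadline in the "took 4m0.000182343s" line expires. Below is a minimal, hypothetical sketch of that kind of readiness poll using client-go; the helper name waitPodReady, the ~2s cadence, and running it directly against /var/lib/minikube/kubeconfig (the path seen in the describe-nodes command later in this log) are illustrative assumptions, not minikube's actual implementation.

// waitPodReady is an illustrative sketch of the poll reflected by the
// pod_ready.go lines above: fetch the pod, check its Ready condition,
// and retry until a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log timestamps
	}
	return fmt.Errorf("timed out waiting %s for pod %q to be Ready", timeout, name)
}

func main() {
	// Assumed path; taken from the kubeconfig argument shown later in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-25p7z", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}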
	I0817 22:29:05.643956  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:29:05.643998  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:05.644060  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:05.703194  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:05.703221  255057 cri.go:89] found id: ""
	I0817 22:29:05.703229  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:05.703283  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.708602  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:05.708676  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:05.747581  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:05.747610  255057 cri.go:89] found id: ""
	I0817 22:29:05.747619  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:05.747692  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.753231  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:05.753331  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:05.795460  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:05.795489  255057 cri.go:89] found id: ""
	I0817 22:29:05.795499  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:05.795562  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.801181  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:05.801268  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:05.840433  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:05.840463  255057 cri.go:89] found id: ""
	I0817 22:29:05.840472  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:05.840546  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.845974  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:05.846039  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:05.886216  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:05.886243  255057 cri.go:89] found id: ""
	I0817 22:29:05.886252  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:05.886314  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.891204  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:05.891286  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:05.927636  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:05.927661  255057 cri.go:89] found id: ""
	I0817 22:29:05.927669  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:05.927732  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.932173  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:05.932230  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:05.963603  255057 cri.go:89] found id: ""
	I0817 22:29:05.963634  255057 logs.go:284] 0 containers: []
	W0817 22:29:05.963646  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:05.963654  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:05.963727  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:05.996465  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:05.996489  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:05.996496  255057 cri.go:89] found id: ""
	I0817 22:29:05.996505  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:05.996572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.001291  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.006314  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:06.006348  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:06.051348  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:06.051386  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:06.226315  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:06.226362  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:06.263289  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:06.263321  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:06.308223  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:06.308262  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:06.346964  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:06.347001  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:06.382834  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:06.382878  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:06.431491  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:06.431527  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:06.485901  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:06.485948  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:07.054256  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:07.054315  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:07.093229  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093417  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093570  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093737  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.119377  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:07.119420  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:07.137712  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:07.137756  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:07.187463  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:07.187511  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:07.252728  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252775  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:07.252844  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:07.252856  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252865  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252872  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252878  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.252884  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252890  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
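The cri.go and logs.go lines above show the log-gathering pattern: resolve each control-plane container with "crictl ps -a --quiet --name=<component>", then tail its output with "crictl logs --tail 400 <id>" (run over SSH inside the node). The sketch below reproduces that shell pattern locally via os/exec for illustration only; the helper name gatherComponentLogs and skipping the SSH layer are assumptions, not minikube's code.

// gatherComponentLogs mirrors the crictl Run: lines above: list container
// IDs by component name, then read the last 400 log lines of each.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherComponentLogs(component string) (string, error) {
	// Corresponds to: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return "", fmt.Errorf("listing %s containers: %w", component, err)
	}
	var logs strings.Builder
	for _, id := range strings.Fields(string(out)) {
		// Corresponds to: sudo /usr/bin/crictl logs --tail 400 <id>
		body, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("reading logs for container %s: %w", id, err)
		}
		logs.Write(body)
	}
	return logs.String(), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		if out, err := gatherComponentLogs(c); err == nil {
			fmt.Printf("=== %s ===\n%s\n", c, out)
		}
	}
}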
	I0817 22:29:08.741270  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:11.245029  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:08.516388  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:10.518542  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:07.775391  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:09.775841  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:12.276748  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.741788  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:16.242264  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.018983  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:15.516221  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.774832  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.967926  255215 pod_ready.go:81] duration metric: took 4m0.000797383s waiting for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:14.967968  255215 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:14.967995  255215 pod_ready.go:38] duration metric: took 4m12.638851973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:14.968025  255215 kubeadm.go:640] restartCluster took 4m34.07416066s
	W0817 22:29:14.968112  255215 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:14.968150  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:17.254245  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:29:17.278452  255057 api_server.go:72] duration metric: took 4m21.775005609s to wait for apiserver process to appear ...
	I0817 22:29:17.278488  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:29:17.278540  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:17.278675  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:17.317529  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:17.317554  255057 cri.go:89] found id: ""
	I0817 22:29:17.317562  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:17.317626  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.323505  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:17.323593  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:17.367258  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.367282  255057 cri.go:89] found id: ""
	I0817 22:29:17.367290  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:17.367355  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.372332  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:17.372424  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:17.406884  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:17.406914  255057 cri.go:89] found id: ""
	I0817 22:29:17.406923  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:17.406990  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.411562  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:17.411626  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:17.452516  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.452549  255057 cri.go:89] found id: ""
	I0817 22:29:17.452560  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:17.452654  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.458237  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:17.458327  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:17.498524  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:17.498550  255057 cri.go:89] found id: ""
	I0817 22:29:17.498559  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:17.498621  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.504941  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:17.505024  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:17.543542  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.543570  255057 cri.go:89] found id: ""
	I0817 22:29:17.543580  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:17.543646  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.548420  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:17.548488  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:17.589411  255057 cri.go:89] found id: ""
	I0817 22:29:17.589441  255057 logs.go:284] 0 containers: []
	W0817 22:29:17.589449  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:17.589455  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:17.589520  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:17.624044  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:17.624075  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.624083  255057 cri.go:89] found id: ""
	I0817 22:29:17.624092  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:17.624160  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.631040  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.635336  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:17.635359  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:17.688966  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689294  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689576  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689899  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:17.729861  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:17.729923  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:17.746619  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:17.746663  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.805149  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:17.805198  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.842639  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:17.842673  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.905357  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:17.905406  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.943860  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:17.943893  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:18.242331  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:20.742262  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:17.517585  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:19.519464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:18.114000  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:18.114038  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:18.176549  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:18.176602  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:18.211903  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:18.211947  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:18.246566  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:18.246600  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:18.280810  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:18.280853  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:18.831902  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:18.831957  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:18.883170  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883219  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:18.883304  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:18.883323  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883336  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883352  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883364  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:18.883382  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883391  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:23.242587  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:25.742126  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:22.017269  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:24.017806  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:26.516458  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.241489  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:30.741723  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.516703  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:31.016367  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.884252  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:29:28.889957  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:29:28.891532  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:29:28.891560  255057 api_server.go:131] duration metric: took 11.613062869s to wait for apiserver health ...
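The healthz and version probes above can be reproduced directly against the endpoint shown in the log; a minimal sketch, assuming anonymous access to /healthz and /version is still enabled (the Kubernetes default via the system:public-info-viewer binding):

  curl -ks https://192.168.61.196:8443/healthz   # should print: ok
  curl -ks https://192.168.61.196:8443/version   # reports v1.28.0-rc.1 for this cluster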
	I0817 22:29:28.891571  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:29:28.891602  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:28.891669  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:28.927462  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:28.927496  255057 cri.go:89] found id: ""
	I0817 22:29:28.927506  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:28.927572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.932195  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:28.932284  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:28.974041  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:28.974092  255057 cri.go:89] found id: ""
	I0817 22:29:28.974103  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:28.974172  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.978230  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:28.978302  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:29.012431  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.012459  255057 cri.go:89] found id: ""
	I0817 22:29:29.012469  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:29.012539  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.017232  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:29.017311  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:29.051208  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.051235  255057 cri.go:89] found id: ""
	I0817 22:29:29.051242  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:29.051292  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.056125  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:29.056193  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:29.094165  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.094196  255057 cri.go:89] found id: ""
	I0817 22:29:29.094207  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:29.094277  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.098992  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:29.099054  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:29.138522  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.138552  255057 cri.go:89] found id: ""
	I0817 22:29:29.138561  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:29.138614  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.143075  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:29.143159  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:29.177797  255057 cri.go:89] found id: ""
	I0817 22:29:29.177831  255057 logs.go:284] 0 containers: []
	W0817 22:29:29.177842  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:29.177850  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:29.177916  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:29.208897  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.208922  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.208928  255057 cri.go:89] found id: ""
	I0817 22:29:29.208937  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:29.209008  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.213083  255057 ssh_runner.go:195] Run: which crictl
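Each "listing CRI containers" step above filters by container name, resolves the crictl binary with `which crictl`, and later pulls logs for every returned ID. The same enumeration can be done by hand using only flags that appear in the log, e.g. for the apiserver:

  for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
    sudo /usr/bin/crictl logs --tail 400 "$id"
  done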
	I0817 22:29:29.217020  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:29.217043  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:29.253559  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253779  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253989  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.254225  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:29.280705  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:29.280746  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:29.295400  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:29.295432  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:29.344222  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:29.344268  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:29.482768  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:29.482812  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:29.541274  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:29.541317  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.577842  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:29.577876  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.613556  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:29.613595  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.654840  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:29.654886  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.711929  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:29.711974  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.749746  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:29.749802  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.782899  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:29.782932  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:30.286425  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:30.286488  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:30.328588  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328616  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:30.328686  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:30.328701  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328715  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328729  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328745  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:30.328754  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328762  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:32.741952  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.241640  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:33.516723  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.516887  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.339646  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:29:40.339676  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.339681  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.339685  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.339690  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.339694  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.339698  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.339705  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.339711  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.339722  255057 system_pods.go:74] duration metric: took 11.448139171s to wait for pod list to return data ...
	I0817 22:29:40.339730  255057 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:29:40.344246  255057 default_sa.go:45] found service account: "default"
	I0817 22:29:40.344271  255057 default_sa.go:55] duration metric: took 4.534553ms for default service account to be created ...
	I0817 22:29:40.344280  255057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:29:40.353485  255057 system_pods.go:86] 8 kube-system pods found
	I0817 22:29:40.353521  255057 system_pods.go:89] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.353529  255057 system_pods.go:89] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.353537  255057 system_pods.go:89] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.353546  255057 system_pods.go:89] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.353553  255057 system_pods.go:89] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.353560  255057 system_pods.go:89] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.353579  255057 system_pods.go:89] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.353589  255057 system_pods.go:89] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.353598  255057 system_pods.go:126] duration metric: took 9.313259ms to wait for k8s-apps to be running ...
	I0817 22:29:40.353612  255057 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:29:40.353685  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:40.376714  255057 system_svc.go:56] duration metric: took 23.088082ms WaitForService to wait for kubelet.
	I0817 22:29:40.376759  255057 kubeadm.go:581] duration metric: took 4m44.873323742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:29:40.377191  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:29:40.385016  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:29:40.385043  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:29:40.385055  255057 node_conditions.go:105] duration metric: took 7.857619ms to run NodePressure ...
	I0817 22:29:40.385068  255057 start.go:228] waiting for startup goroutines ...
	I0817 22:29:40.385074  255057 start.go:233] waiting for cluster config update ...
	I0817 22:29:40.385085  255057 start.go:242] writing updated cluster config ...
	I0817 22:29:40.385411  255057 ssh_runner.go:195] Run: rm -f paused
	I0817 22:29:40.457420  255057 start.go:600] kubectl: 1.28.0, cluster: 1.28.0-rc.1 (minor skew: 0)
	I0817 22:29:40.460043  255057 out.go:177] * Done! kubectl is now configured to use "no-preload-525875" cluster and "default" namespace by default
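With the profile marked Done, its kubeconfig context is active; a quick manual sanity check (hypothetical follow-up, not executed by the test) would be:

  kubectl --context no-preload-525875 get nodes
  kubectl --context no-preload-525875 -n kube-system get pods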
	I0817 22:29:37.242898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:37.462917  255491 pod_ready.go:81] duration metric: took 4m0.00026087s waiting for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:37.462956  255491 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:37.463009  255491 pod_ready.go:38] duration metric: took 4m10.583985022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:37.463050  255491 kubeadm.go:640] restartCluster took 4m32.042723788s
	W0817 22:29:37.463141  255491 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:37.463185  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:37.517852  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.016790  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:42.517001  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:45.016757  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:47.291163  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.322979002s)
	I0817 22:29:47.291246  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:47.305948  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:29:47.316036  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:29:47.325470  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:29:47.325519  255215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:29:47.566297  255215 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:29:47.017112  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:49.017246  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:51.018095  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:53.519020  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:56.016627  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.087786  255215 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:29:59.087860  255215 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:29:59.087991  255215 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:29:59.088169  255215 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:29:59.088306  255215 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:29:59.088388  255215 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:29:59.090358  255215 out.go:204]   - Generating certificates and keys ...
	I0817 22:29:59.090460  255215 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:29:59.090547  255215 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:29:59.090660  255215 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:29:59.090766  255215 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:29:59.090886  255215 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:29:59.090976  255215 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:29:59.091060  255215 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:29:59.091152  255215 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:29:59.091250  255215 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:29:59.091350  255215 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:29:59.091435  255215 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:29:59.091514  255215 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:29:59.091589  255215 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:29:59.091655  255215 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:29:59.091759  255215 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:29:59.091836  255215 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:29:59.091960  255215 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:29:59.092070  255215 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:29:59.092127  255215 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:29:59.092207  255215 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:29:59.094268  255215 out.go:204]   - Booting up control plane ...
	I0817 22:29:59.094408  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:29:59.094513  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:29:59.094594  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:29:59.094719  255215 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:29:59.094944  255215 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:29:59.095031  255215 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504676 seconds
	I0817 22:29:59.095206  255215 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:29:59.095401  255215 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:29:59.095494  255215 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:29:59.095757  255215 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-437183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:29:59.095844  255215 kubeadm.go:322] [bootstrap-token] Using token: 0fftkt.nm31ryo8p4990tdr
	I0817 22:29:59.097581  255215 out.go:204]   - Configuring RBAC rules ...
	I0817 22:29:59.097750  255215 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:29:59.097884  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:29:59.098097  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:29:59.098258  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:29:59.098405  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:29:59.098510  255215 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:29:59.098679  255215 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:29:59.098745  255215 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:29:59.098802  255215 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:29:59.098811  255215 kubeadm.go:322] 
	I0817 22:29:59.098889  255215 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:29:59.098898  255215 kubeadm.go:322] 
	I0817 22:29:59.099010  255215 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:29:59.099033  255215 kubeadm.go:322] 
	I0817 22:29:59.099069  255215 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:29:59.099142  255215 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:29:59.099221  255215 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:29:59.099232  255215 kubeadm.go:322] 
	I0817 22:29:59.099297  255215 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:29:59.099307  255215 kubeadm.go:322] 
	I0817 22:29:59.099365  255215 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:29:59.099374  255215 kubeadm.go:322] 
	I0817 22:29:59.099446  255215 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:29:59.099552  255215 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:29:59.099660  255215 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:29:59.099670  255215 kubeadm.go:322] 
	I0817 22:29:59.099799  255215 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:29:59.099909  255215 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:29:59.099917  255215 kubeadm.go:322] 
	I0817 22:29:59.100037  255215 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100173  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:29:59.100205  255215 kubeadm.go:322] 	--control-plane 
	I0817 22:29:59.100218  255215 kubeadm.go:322] 
	I0817 22:29:59.100348  255215 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:29:59.100359  255215 kubeadm.go:322] 
	I0817 22:29:59.100472  255215 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100610  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
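The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA public key; if the join line is lost it can be recomputed on the control plane with the standard openssl pipeline (CA path based on the certificateDir reported earlier in this init run):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'

Alternatively, running `kubeadm token create --print-join-command` on the control-plane node regenerates a complete join command.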
	I0817 22:29:59.100639  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:29:59.100650  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:29:59.102534  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:29:58.017949  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:00.519619  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.104107  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:29:59.128756  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
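The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain chosen above for the kvm2 + crio combination. A representative conflist of that shape (illustrative only; the exact fields and pod subnet are assumptions, not the bytes minikube wrote):

  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'CONF'
{ "cniVersion": "0.3.1", "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } } ] }
CONF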
	I0817 22:29:59.172002  255215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=embed-certs-437183 minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.717974  255215 ops.go:34] apiserver oom_adj: -16
	I0817 22:29:59.718154  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.815994  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.419198  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.919196  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.419096  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.919517  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:02.419076  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.017120  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:05.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:02.919289  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.419268  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.919021  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.418663  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.919015  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.419325  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.919309  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.418701  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.919301  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.418670  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.919445  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.419363  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.918988  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.418788  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.918948  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.418731  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.919293  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.419374  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.578800  255215 kubeadm.go:1081] duration metric: took 12.40679081s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:11.578850  255215 kubeadm.go:406] StartCluster complete in 5m30.729798213s
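The long run of identical "kubectl get sa default" calls above is a readiness poll: bring-up is not treated as complete until the default ServiceAccount exists in the default namespace (about 12.4s here). A minimal stand-alone version of that wait loop, with the binary path and kubeconfig taken from the log:

  until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done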
	I0817 22:30:11.578877  255215 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.578990  255215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:11.581741  255215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.582107  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:11.582305  255215 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
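The toEnable map above mirrors the profile's addon switches (metrics-server, storage-provisioner and default-storageclass on, the rest off). The same state can be set per profile from the CLI; shown only as an illustration of the equivalent commands:

  minikube -p embed-certs-437183 addons enable metrics-server
  minikube -p embed-certs-437183 addons list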
	I0817 22:30:11.582414  255215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-437183"
	I0817 22:30:11.582435  255215 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-437183"
	I0817 22:30:11.582433  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:11.582436  255215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-437183"
	I0817 22:30:11.582449  255215 addons.go:69] Setting metrics-server=true in profile "embed-certs-437183"
	I0817 22:30:11.582461  255215 addons.go:231] Setting addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:11.582465  255215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-437183"
	W0817 22:30:11.582467  255215 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:11.582521  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	W0817 22:30:11.582443  255215 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:11.582609  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.582956  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582976  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582992  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583000  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583326  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.583361  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.600606  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0817 22:30:11.601162  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.601890  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.601918  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.602386  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.603044  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.603110  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.603922  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0817 22:30:11.604193  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0817 22:30:11.604476  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.604711  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.605320  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605342  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605474  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605500  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605874  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.605927  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.606184  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.606616  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.606654  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.622026  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0817 22:30:11.622822  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.623522  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.623556  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.624021  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.624332  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.626478  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.629171  255215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:11.627845  255215 addons.go:231] Setting addon default-storageclass=true in "embed-certs-437183"
	W0817 22:30:11.629212  255215 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:11.629267  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.628437  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0817 22:30:11.629683  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.631294  255215 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.631295  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.629905  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.631315  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:11.631339  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.632333  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.632356  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.632860  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.633085  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.635520  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.635727  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.638116  255215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:09.776936  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.313725935s)
	I0817 22:30:09.777008  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:09.794808  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:09.806086  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:09.818495  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:09.818547  255491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:30:10.061316  255491 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:30:11.636353  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.636644  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.640483  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.640486  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:11.640508  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:11.640535  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.640703  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.640905  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.641073  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.645685  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646351  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.646376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646867  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.647096  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.647286  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.647444  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.655819  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0817 22:30:11.656540  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.657308  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.657326  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.657864  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.658485  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.658520  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.679610  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0817 22:30:11.680268  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.680977  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.681013  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.681485  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.681722  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.683711  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.686274  255215 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.686297  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:11.686323  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.692154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.692160  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692245  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.692288  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692447  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.692691  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.692899  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.742259  255215 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-437183" context rescaled to 1 replicas
	I0817 22:30:11.742317  255215 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:11.744647  255215 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:07.516999  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:10.016647  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:11.746674  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:11.833127  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.853282  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:11.853316  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:11.858219  255215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.858353  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
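The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host address 192.168.39.1 ahead of the forward stanza and adds the log plugin before errors, then replaces the ConfigMap. The result can be inspected afterwards; the relevant Corefile fragment should look roughly like the comment below:

  #     log
  #     errors
  #     hosts {
  #        192.168.39.1 host.minikube.internal
  #        fallthrough
  #     }
  #     forward . /etc/resolv.conf ...
  sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get configmap coredns -o yaml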
	I0817 22:30:11.889330  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.896554  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:11.896595  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:11.906260  255215 node_ready.go:49] node "embed-certs-437183" has status "Ready":"True"
	I0817 22:30:11.906292  255215 node_ready.go:38] duration metric: took 48.027482ms waiting for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.906305  255215 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
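node_ready and the pod_ready wait that follows both poll the API for Ready conditions; an equivalent one-shot check of the node condition with kubectl's jsonpath output (context name assumed to match the profile):

  kubectl --context embed-certs-437183 get node embed-certs-437183 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect: True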
	I0817 22:30:11.949379  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:11.949409  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:12.023543  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:12.131426  255215 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:14.420517  255215 pod_ready.go:102] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.647805  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.814629092s)
	I0817 22:30:14.647842  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78945104s)
	I0817 22:30:14.647874  255215 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:14.647904  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.758517925s)
	I0817 22:30:14.647915  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648017  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648042  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648067  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648478  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.648532  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.648626  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.648638  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648656  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648882  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.649025  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.649050  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.649069  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.650529  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.650577  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.650586  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.650600  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.650614  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.651171  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.651230  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.652509  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652529  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.652688  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652708  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.175766  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.152137099s)
	I0817 22:30:15.175888  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.175915  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176344  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.176343  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.176428  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.176452  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.176488  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176915  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.178804  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.178827  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.178840  255215 addons.go:467] Verifying addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:15.180928  255215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:30:12.018605  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.519226  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:15.182515  255215 addons.go:502] enable addons completed in 3.600222172s: enabled=[default-storageclass storage-provisioner metrics-server]
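	(The addon manifests above are applied with the cluster's bundled kubectl against /var/lib/minikube/kubeconfig. For tracing a single step of that sequence outside the test harness, a minimal Go sketch — paths copied from the log; an illustration, not minikube's ssh_runner code — might look like:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        // Run the bundled kubectl exactly as the log shows: apply one addon manifest
	        // with KUBECONFIG pointing at the cluster kubeconfig on the node.
	        cmd := exec.Command("/var/lib/minikube/binaries/v1.27.4/kubectl",
	            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	        out, err := cmd.CombinedOutput()
	        fmt.Printf("%s", out)
	        if err != nil {
	            fmt.Println("apply failed:", err)
	        }
	    }
	)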
	I0817 22:30:16.920634  255215 pod_ready.go:92] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.920664  255215 pod_ready.go:81] duration metric: took 4.789200515s waiting for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.920674  255215 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937440  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.937469  255215 pod_ready.go:81] duration metric: took 16.789093ms waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937483  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944411  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.944437  255215 pod_ready.go:81] duration metric: took 6.944986ms waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944451  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952239  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.952267  255215 pod_ready.go:81] duration metric: took 7.807798ms waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952281  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815597  255215 pod_ready.go:92] pod "kube-proxy-2f6jz" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:17.815630  255215 pod_ready.go:81] duration metric: took 863.340907ms waiting for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815644  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108648  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:18.108683  255215 pod_ready.go:81] duration metric: took 293.029473ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108693  255215 pod_ready.go:38] duration metric: took 6.202373203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:18.108726  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:18.108789  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:18.129379  255215 api_server.go:72] duration metric: took 6.38701969s to wait for apiserver process to appear ...
	I0817 22:30:18.129409  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:18.129425  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:30:18.138226  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:30:18.141542  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:18.141568  255215 api_server.go:131] duration metric: took 12.152138ms to wait for apiserver health ...
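	(The healthz wait above is a plain HTTP GET against the apiserver's /healthz endpoint until it answers 200 "ok". A minimal Go sketch of such a poll — URL taken from the log; TLS verification is skipped here purely to keep the example short, which is an assumption rather than what minikube does:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.39.186:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	)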
	I0817 22:30:18.141579  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:18.312736  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:30:18.312782  255215 system_pods.go:61] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.312790  255215 system_pods.go:61] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.312798  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.312804  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.312811  255215 system_pods.go:61] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.312817  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.312831  255215 system_pods.go:61] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.312841  255215 system_pods.go:61] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.312855  255215 system_pods.go:74] duration metric: took 171.269837ms to wait for pod list to return data ...
	I0817 22:30:18.312868  255215 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:18.511271  255215 default_sa.go:45] found service account: "default"
	I0817 22:30:18.511380  255215 default_sa.go:55] duration metric: took 198.492073ms for default service account to be created ...
	I0817 22:30:18.511401  255215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:18.710880  255215 system_pods.go:86] 8 kube-system pods found
	I0817 22:30:18.710911  255215 system_pods.go:89] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.710917  255215 system_pods.go:89] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.710921  255215 system_pods.go:89] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.710926  255215 system_pods.go:89] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.710929  255215 system_pods.go:89] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.710933  255215 system_pods.go:89] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.710943  255215 system_pods.go:89] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.710949  255215 system_pods.go:89] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.710958  255215 system_pods.go:126] duration metric: took 199.549571ms to wait for k8s-apps to be running ...
	I0817 22:30:18.710967  255215 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:18.711013  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:18.725788  255215 system_svc.go:56] duration metric: took 14.807351ms WaitForService to wait for kubelet.
	I0817 22:30:18.725819  255215 kubeadm.go:581] duration metric: took 6.983465617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:18.725846  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:18.908038  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:18.908079  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:18.908093  255215 node_conditions.go:105] duration metric: took 182.240177ms to run NodePressure ...
	I0817 22:30:18.908108  255215 start.go:228] waiting for startup goroutines ...
	I0817 22:30:18.908127  255215 start.go:233] waiting for cluster config update ...
	I0817 22:30:18.908142  255215 start.go:242] writing updated cluster config ...
	I0817 22:30:18.908536  255215 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:18.962718  255215 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:18.965052  255215 out.go:177] * Done! kubectl is now configured to use "embed-certs-437183" cluster and "default" namespace by default
	I0817 22:30:17.018314  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:19.517055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:21.517216  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:22.302082  255491 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:30:22.302198  255491 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:22.302316  255491 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:22.302392  255491 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:22.302537  255491 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:22.302623  255491 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:22.304947  255491 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:22.305043  255491 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:22.305112  255491 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:22.305227  255491 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:22.305295  255491 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:22.305389  255491 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:22.305466  255491 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:22.305540  255491 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:22.305614  255491 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:22.305703  255491 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:22.305801  255491 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:22.305861  255491 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:22.305956  255491 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:22.306043  255491 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:22.306141  255491 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:22.306231  255491 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:22.306313  255491 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:22.306462  255491 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:22.306597  255491 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:22.306674  255491 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:30:22.306778  255491 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:22.308372  255491 out.go:204]   - Booting up control plane ...
	I0817 22:30:22.308478  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:22.308565  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:22.308644  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:22.308735  255491 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:22.308942  255491 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:22.309046  255491 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003655 seconds
	I0817 22:30:22.309195  255491 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:22.309352  255491 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:22.309430  255491 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:22.309656  255491 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-321287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:30:22.309729  255491 kubeadm.go:322] [bootstrap-token] Using token: vtugjh.yrdml71jezyixk01
	I0817 22:30:22.311499  255491 out.go:204]   - Configuring RBAC rules ...
	I0817 22:30:22.311610  255491 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:30:22.311706  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:30:22.311887  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:30:22.312069  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:30:22.312240  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:30:22.312338  255491 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:30:22.312462  255491 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:30:22.312516  255491 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:30:22.312583  255491 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:30:22.312595  255491 kubeadm.go:322] 
	I0817 22:30:22.312680  255491 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:30:22.312693  255491 kubeadm.go:322] 
	I0817 22:30:22.312798  255491 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:30:22.312806  255491 kubeadm.go:322] 
	I0817 22:30:22.312847  255491 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:30:22.312926  255491 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:30:22.313008  255491 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:30:22.313016  255491 kubeadm.go:322] 
	I0817 22:30:22.313073  255491 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:30:22.313092  255491 kubeadm.go:322] 
	I0817 22:30:22.313135  255491 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:30:22.313141  255491 kubeadm.go:322] 
	I0817 22:30:22.313180  255491 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:30:22.313271  255491 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:30:22.313397  255491 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:30:22.313421  255491 kubeadm.go:322] 
	I0817 22:30:22.313561  255491 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:30:22.313670  255491 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:30:22.313691  255491 kubeadm.go:322] 
	I0817 22:30:22.313790  255491 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.313910  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:30:22.313930  255491 kubeadm.go:322] 	--control-plane 
	I0817 22:30:22.313933  255491 kubeadm.go:322] 
	I0817 22:30:22.314017  255491 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:30:22.314029  255491 kubeadm.go:322] 
	I0817 22:30:22.314161  255491 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.314324  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
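	(The --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA certificate's SubjectPublicKeyInfo, which joining nodes use to pin the control plane's identity. A short Go sketch for recomputing it from the CA file on the node — the /etc/kubernetes/pki/ca.crt path is the kubeadm default and an assumption here:

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/hex"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Read the cluster CA certificate, extract its public key in SPKI (DER) form,
	        // and hash it the way kubeadm's --discovery-token-ca-cert-hash convention expects.
	        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            panic("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	        if err != nil {
	            panic(err)
	        }
	        sum := sha256.Sum256(spki)
	        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	    }
	)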
	I0817 22:30:22.314342  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:30:22.314352  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:30:22.316092  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:30:22.317823  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:30:22.330216  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:30:22.364427  255491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:30:22.364530  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.364541  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=default-k8s-diff-port-321287 minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.398800  255491 ops.go:34] apiserver oom_adj: -16
	I0817 22:30:22.789239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.908906  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.507279  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.007071  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.507204  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.007980  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.507764  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.007834  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.507449  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.518185  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:26.017066  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:27.007162  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:27.507978  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.008024  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.507376  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.007583  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.507355  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.007416  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.507014  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.007539  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.507116  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.516778  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:31.016979  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:32.007363  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:32.508019  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.007624  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.507337  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.007239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.507255  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.007804  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.507323  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.647403  255491 kubeadm.go:1081] duration metric: took 13.282950211s to wait for elevateKubeSystemPrivileges.
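	(The long run of `kubectl get sa default` calls above is a poll for the default ServiceAccount to exist, which is what the elevateKubeSystemPrivileges step waits on before binding cluster-admin to kube-system:default. A minimal stand-alone sketch of that loop — binary and kubeconfig paths taken from the log; the 500 ms interval is an assumption:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or times out.
	    func waitForDefaultSA(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("/var/lib/minikube/binaries/v1.27.4/kubectl",
	                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	            if err := cmd.Run(); err == nil {
	                return nil // default ServiceAccount exists
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("default ServiceAccount not created within %s", timeout)
	    }

	    func main() {
	        if err := waitForDefaultSA(2 * time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	)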
	I0817 22:30:35.647439  255491 kubeadm.go:406] StartCluster complete in 5m30.275148595s
	I0817 22:30:35.647465  255491 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.647562  255491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:35.649294  255491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.649625  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:35.649672  255491 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:35.649793  255491 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649815  255491 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.649827  255491 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:35.649857  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:35.649897  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.649914  255491 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649931  255491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-321287"
	I0817 22:30:35.650130  255491 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.650154  255491 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.650163  255491 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:35.650207  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.650360  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650362  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650397  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650456  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650616  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650660  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.666863  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0817 22:30:35.666883  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0817 22:30:35.667444  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.667657  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.668085  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668105  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668245  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668256  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668780  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.669523  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.669553  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.670006  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:30:35.670382  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.670448  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.670513  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.670985  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.671005  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.671824  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.672870  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.672905  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.682146  255491 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.682167  255491 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:35.682200  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.682640  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.682674  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.690436  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0817 22:30:35.691039  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.691642  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.691666  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.692056  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.692328  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.692416  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0817 22:30:35.693048  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.693566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.693588  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.693974  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.694180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.694314  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.696623  255491 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:35.696015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.698535  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:35.698555  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:35.698593  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.700284  255491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:35.702071  255491 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.702097  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:35.702127  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.703050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.703111  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.703161  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703297  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.703498  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.703605  255491 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-321287" context rescaled to 1 replicas
	I0817 22:30:35.703641  255491 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:35.706989  255491 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:35.703707  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.707227  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.707832  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40363
	I0817 22:30:35.708116  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.709223  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:35.709358  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.709408  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.709426  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.709650  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.709767  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.709979  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.710587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.710608  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.711008  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.711578  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.711631  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.730317  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35051
	I0817 22:30:35.730875  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.731566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.731595  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.731993  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.732228  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.734475  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.734778  255491 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.734799  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:35.734822  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.737878  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.738359  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738478  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.739396  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.739599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.739850  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.902960  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.913205  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.936947  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:35.936977  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:35.977717  255491 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.977876  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:35.984231  255491 node_ready.go:49] node "default-k8s-diff-port-321287" has status "Ready":"True"
	I0817 22:30:35.984286  255491 node_ready.go:38] duration metric: took 6.524258ms waiting for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.984302  255491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:36.008884  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:36.008915  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:36.010024  255491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.073572  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.073607  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:36.139665  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.382827  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.382863  255491 pod_ready.go:81] duration metric: took 372.809939ms waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.382878  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513607  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.513640  255491 pod_ready.go:81] duration metric: took 130.752675ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513653  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610942  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.610974  255491 pod_ready.go:81] duration metric: took 97.312774ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610989  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:33.017198  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:34.709633  254975 pod_ready.go:81] duration metric: took 4m0.001081095s waiting for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	E0817 22:30:34.709679  254975 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:30:34.709709  254975 pod_ready.go:38] duration metric: took 4m1.187941338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:34.709762  254975 kubeadm.go:640] restartCluster took 5m3.210628062s
	W0817 22:30:34.709854  254975 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:30:34.709895  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:30:38.629738  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.716488882s)
	I0817 22:30:38.629799  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651889874s)
	I0817 22:30:38.629829  255491 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:38.629802  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629871  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.629753  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.726738359s)
	I0817 22:30:38.629944  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629971  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630368  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630389  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630401  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630429  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630528  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630559  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630578  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630587  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630677  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.630707  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630732  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630973  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630991  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.631004  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.631007  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.631015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.632993  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.633019  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.633033  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.758987  255491 pod_ready.go:102] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:39.084274  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.944554423s)
	I0817 22:30:39.084336  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.084785  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.084799  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:39.084817  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.084829  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084842  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.085152  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.085168  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.085179  255491 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-321287"
	I0817 22:30:39.087296  255491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:30:39.089202  255491 addons.go:502] enable addons completed in 3.439530445s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:30:41.238328  255491 pod_ready.go:92] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.238358  255491 pod_ready.go:81] duration metric: took 4.627360634s waiting for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.238376  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.244985  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.245011  255491 pod_ready.go:81] duration metric: took 6.626883ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.245022  255491 pod_ready.go:38] duration metric: took 5.260700173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
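	(The per-pod waits above boil down to reading each system-critical pod and checking its Ready condition. A minimal client-go sketch of that check — kubeconfig path from the log; the pod name and timeout are placeholders, and this is not minikube's pod_ready.go implementation:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podIsReady reports whether the pod's Ready condition is True.
	    func podIsReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	                "kube-proxy-k2jz7", metav1.GetOptions{})
	            if err == nil && podIsReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod to be Ready")
	    }
	)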
	I0817 22:30:41.245042  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:41.245097  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:41.262899  255491 api_server.go:72] duration metric: took 5.559222986s to wait for apiserver process to appear ...
	I0817 22:30:41.262935  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:41.262957  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:30:41.268642  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:30:41.269921  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:41.269947  255491 api_server.go:131] duration metric: took 7.005146ms to wait for apiserver health ...
	I0817 22:30:41.269955  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:41.276807  255491 system_pods.go:59] 9 kube-system pods found
	I0817 22:30:41.276844  255491 system_pods.go:61] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.276855  255491 system_pods.go:61] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.276863  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.276868  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.276875  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.276883  255491 system_pods.go:61] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.276890  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.276908  255491 system_pods.go:61] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.276918  255491 system_pods.go:61] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.276929  255491 system_pods.go:74] duration metric: took 6.967523ms to wait for pod list to return data ...
	I0817 22:30:41.276941  255491 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:41.279696  255491 default_sa.go:45] found service account: "default"
	I0817 22:30:41.279724  255491 default_sa.go:55] duration metric: took 2.773544ms for default service account to be created ...
	I0817 22:30:41.279735  255491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:41.286220  255491 system_pods.go:86] 9 kube-system pods found
	I0817 22:30:41.286258  255491 system_pods.go:89] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.286269  255491 system_pods.go:89] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.286277  255491 system_pods.go:89] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.286283  255491 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.286287  255491 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.286292  255491 system_pods.go:89] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.286296  255491 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.286302  255491 system_pods.go:89] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.286306  255491 system_pods.go:89] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.286316  255491 system_pods.go:126] duration metric: took 6.576272ms to wait for k8s-apps to be running ...
	I0817 22:30:41.286326  255491 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:41.286373  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:41.301841  255491 system_svc.go:56] duration metric: took 15.499888ms WaitForService to wait for kubelet.
	I0817 22:30:41.301874  255491 kubeadm.go:581] duration metric: took 5.598205886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:41.301898  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:41.306253  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:41.306289  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:41.306300  255491 node_conditions.go:105] duration metric: took 4.396496ms to run NodePressure ...
	I0817 22:30:41.306311  255491 start.go:228] waiting for startup goroutines ...
	I0817 22:30:41.306320  255491 start.go:233] waiting for cluster config update ...
	I0817 22:30:41.306329  255491 start.go:242] writing updated cluster config ...
	I0817 22:30:41.306617  255491 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:41.363947  255491 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:41.366167  255491 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-321287" cluster and "default" namespace by default
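The healthz wait logged above (api_server.go) boils down to polling the apiserver's /healthz endpoint until it answers 200 OK. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's implementation: the URL is taken from the log, while the 4-minute timeout and 2-second poll interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses. Illustrative sketch only; the real check in the log
// above also queries the reported control-plane version afterwards.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a cluster-local certificate here, so this
		// sketch skips verification; production code should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s did not report healthy within %s", url, timeout)
}

func main() {
	// Address as logged for default-k8s-diff-port-321287; the timeout is assumed.
	if err := waitForHealthz("https://192.168.50.30:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}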
	I0817 22:30:47.861835  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.151914062s)
	I0817 22:30:47.861926  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:47.877704  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:47.888385  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:47.898212  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:47.898269  254975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0817 22:30:47.957871  254975 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0817 22:30:47.958020  254975 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:48.121563  254975 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:48.121724  254975 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:48.121869  254975 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:48.316212  254975 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:48.316379  254975 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:48.324040  254975 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0817 22:30:48.453946  254975 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:48.456278  254975 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:48.456403  254975 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:48.456486  254975 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:48.456629  254975 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:48.456723  254975 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:48.456831  254975 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:48.456916  254975 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:48.456992  254975 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:48.457084  254975 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:48.457233  254975 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:48.457347  254975 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:48.457400  254975 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:48.457478  254975 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:48.599977  254975 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:48.760474  254975 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:48.873066  254975 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:48.958450  254975 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:48.959335  254975 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:48.961565  254975 out.go:204]   - Booting up control plane ...
	I0817 22:30:48.961672  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:48.972854  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:48.974149  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:48.975110  254975 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:48.981334  254975 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:58.986028  254975 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004044 seconds
	I0817 22:30:58.986232  254975 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:59.005484  254975 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:59.530563  254975 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:59.530730  254975 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-294781 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 22:31:00.039739  254975 kubeadm.go:322] [bootstrap-token] Using token: y5v57w.cds9r5wk990e6rgq
	I0817 22:31:00.041700  254975 out.go:204]   - Configuring RBAC rules ...
	I0817 22:31:00.041831  254975 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:31:00.051302  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:31:00.056478  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:31:00.060403  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:31:00.065454  254975 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:31:00.155583  254975 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:31:00.472429  254975 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:31:00.474442  254975 kubeadm.go:322] 
	I0817 22:31:00.474512  254975 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:31:00.474554  254975 kubeadm.go:322] 
	I0817 22:31:00.474671  254975 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:31:00.474686  254975 kubeadm.go:322] 
	I0817 22:31:00.474708  254975 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:31:00.474808  254975 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:31:00.474883  254975 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:31:00.474895  254975 kubeadm.go:322] 
	I0817 22:31:00.474973  254975 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:31:00.475082  254975 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:31:00.475179  254975 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:31:00.475193  254975 kubeadm.go:322] 
	I0817 22:31:00.475308  254975 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0817 22:31:00.475421  254975 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:31:00.475431  254975 kubeadm.go:322] 
	I0817 22:31:00.475551  254975 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.475696  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:31:00.475750  254975 kubeadm.go:322]     --control-plane 	  
	I0817 22:31:00.475759  254975 kubeadm.go:322] 
	I0817 22:31:00.475881  254975 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:31:00.475937  254975 kubeadm.go:322] 
	I0817 22:31:00.476044  254975 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.476196  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:31:00.476725  254975 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:31:00.476766  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:31:00.476782  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:31:00.478932  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:31:00.480754  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:31:00.496449  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:31:00.527578  254975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:31:00.527658  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.527769  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=old-k8s-version-294781 minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.809784  254975 ops.go:34] apiserver oom_adj: -16
	I0817 22:31:00.809925  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.991957  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:01.627311  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.126890  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.626673  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.127657  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.627284  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.127320  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.627026  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.127336  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.626721  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.127279  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.626697  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.127307  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.626920  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.127266  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.626970  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.126923  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.626808  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.127298  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.627182  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.126639  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.626681  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.127321  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.626904  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.127274  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.627272  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.127457  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.627280  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.127333  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.231130  254975 kubeadm.go:1081] duration metric: took 14.703542822s to wait for elevateKubeSystemPrivileges.
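The block of repeated "kubectl get sa default" runs above is a poll loop: the bootstrapper re-checks roughly every 500ms (as the timestamps suggest) until the "default" service account exists, which is what the 14.7s elevateKubeSystemPrivileges wait measures. A minimal Go sketch of the same idea, shelling out with os/exec as the log does; the function name and the 2-minute timeout are invented for illustration, the binary and kubeconfig paths mirror the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount re-runs "kubectl get sa default" until it
// succeeds (the service account exists) or the timeout elapses, sleeping
// 500ms between attempts. Illustrative sketch; the paths below mirror the
// log, the function name and timeout are assumptions.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}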
	I0817 22:31:15.231183  254975 kubeadm.go:406] StartCluster complete in 5m43.780243338s
	I0817 22:31:15.231254  254975 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.231391  254975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:31:15.233245  254975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.233533  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:31:15.233848  254975 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:31:15.233927  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:31:15.233947  254975 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-294781"
	I0817 22:31:15.233968  254975 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-294781"
	W0817 22:31:15.233977  254975 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:31:15.233983  254975 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234001  254975 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234007  254975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-294781"
	I0817 22:31:15.234021  254975 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-294781"
	W0817 22:31:15.234040  254975 addons.go:240] addon metrics-server should already be in state true
	I0817 22:31:15.234075  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234097  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234576  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234581  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234650  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.252847  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0817 22:31:15.252891  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0817 22:31:15.253743  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.253833  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.254616  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254632  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.254713  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0817 22:31:15.254887  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254906  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.255216  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255276  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.255294  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255865  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255872  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255960  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.255977  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.256400  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.256604  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.269860  254975 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-294781"
	W0817 22:31:15.269883  254975 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:31:15.269911  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.270304  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.270335  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.273014  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0817 22:31:15.273532  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.274114  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.274134  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.274549  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.274769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.276415  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.276491  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0817 22:31:15.278935  254975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:31:15.277380  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.278041  254975 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-294781" context rescaled to 1 replicas
	I0817 22:31:15.280642  254975 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:31:15.282441  254975 out.go:177] * Verifying Kubernetes components...
	I0817 22:31:15.280856  254975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.281832  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.284263  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.284347  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:31:15.284348  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:31:15.284366  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.285256  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.285580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.288289  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.288456  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.290643  254975 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:31:15.289601  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.289769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.292678  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:31:15.292693  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:31:15.292721  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.292776  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.293060  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.293277  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.293791  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.297193  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0817 22:31:15.297816  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.298486  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.298506  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.298962  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.299508  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.299531  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.300275  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.300994  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.301024  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.301098  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.301296  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.301502  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.301651  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.321283  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0817 22:31:15.321876  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.322943  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.322971  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.323496  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.323842  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.326563  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.326910  254975 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.326933  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:31:15.326957  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.330190  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.330947  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.330978  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.331193  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.331422  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.331552  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.331681  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.497277  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.529500  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.531359  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:31:15.531381  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:31:15.585477  254975 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.585494  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:31:15.590969  254975 node_ready.go:49] node "old-k8s-version-294781" has status "Ready":"True"
	I0817 22:31:15.591001  254975 node_ready.go:38] duration metric: took 5.470452ms waiting for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.591012  254975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:15.594026  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:31:15.594077  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:31:15.596784  254975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:15.638420  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:15.638455  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:31:15.707735  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:16.690916  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.193582768s)
	I0817 22:31:16.690987  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691002  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691002  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161462189s)
	I0817 22:31:16.691042  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105375097s)
	I0817 22:31:16.691044  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691217  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691158  254975 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0817 22:31:16.691422  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691464  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691490  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691561  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691512  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691586  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691603  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691630  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691813  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691832  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692047  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692086  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692110  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.692130  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.692114  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.692460  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692480  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828440  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.120652237s)
	I0817 22:31:16.828511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828525  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.828913  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.828939  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828952  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828963  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.829228  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.829252  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.829264  254975 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-294781"
	I0817 22:31:16.829279  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.831430  254975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:31:16.834005  254975 addons.go:502] enable addons completed in 1.600151352s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:31:17.618673  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.110224  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.610989  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.611015  254975 pod_ready.go:81] duration metric: took 5.014205232s waiting for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.611025  254975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616618  254975 pod_ready.go:92] pod "kube-proxy-44jmp" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.616639  254975 pod_ready.go:81] duration metric: took 5.608097ms waiting for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616646  254975 pod_ready.go:38] duration metric: took 5.025620457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:20.616695  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:31:20.616748  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:31:20.633102  254975 api_server.go:72] duration metric: took 5.352419031s to wait for apiserver process to appear ...
	I0817 22:31:20.633131  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:31:20.633152  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:31:20.640585  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:31:20.641784  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:31:20.641807  254975 api_server.go:131] duration metric: took 8.66923ms to wait for apiserver health ...
	I0817 22:31:20.641815  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:31:20.647851  254975 system_pods.go:59] 4 kube-system pods found
	I0817 22:31:20.647904  254975 system_pods.go:61] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.647909  254975 system_pods.go:61] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.647917  254975 system_pods.go:61] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.647923  254975 system_pods.go:61] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.647929  254975 system_pods.go:74] duration metric: took 6.108947ms to wait for pod list to return data ...
	I0817 22:31:20.647937  254975 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:31:20.651451  254975 default_sa.go:45] found service account: "default"
	I0817 22:31:20.651485  254975 default_sa.go:55] duration metric: took 3.540013ms for default service account to be created ...
	I0817 22:31:20.651496  254975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:31:20.655529  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.655556  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.655561  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.655567  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.655575  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.655593  254975 retry.go:31] will retry after 194.203175ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
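The "will retry after ..." lines here and below show the harness polling kube-system with a growing, jittered backoff until etcd, kube-apiserver, kube-controller-manager and kube-scheduler appear. A rough Go sketch of that retry shape, assuming an invented starting interval, growth factor and jitter; the actual intervals in the log come from retry.go and differ.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it succeeds or the timeout elapses,
// sleeping a jittered, growing interval between attempts, the same shape as
// the "will retry after ..." lines in the log. Sketch only; the starting
// interval, growth factor and jitter here are assumptions.
func retryWithBackoff(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	interval := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		sleep := interval + time.Duration(rand.Int63n(int64(interval/2))) // up to 50% jitter
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		interval = interval * 3 / 2
	}
	return errors.New("timed out waiting for kube-system components")
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler")
		}
		return nil
	}, time.Minute)
}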
	I0817 22:31:20.860033  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.860063  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.860069  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.860076  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.860082  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.860098  254975 retry.go:31] will retry after 273.217607ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.138457  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.138483  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.138488  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.138494  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.138501  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.138520  254975 retry.go:31] will retry after 311.999616ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.455473  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.455507  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.455513  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.455519  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.455526  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.455542  254975 retry.go:31] will retry after 462.378441ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.922656  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.922695  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.922703  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.922714  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.922724  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.922743  254975 retry.go:31] will retry after 595.850716ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:22.525024  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:22.525067  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:22.525076  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:22.525087  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:22.525100  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:22.525123  254975 retry.go:31] will retry after 916.880182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:23.446648  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:23.446678  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:23.446684  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:23.446691  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:23.446697  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:23.446717  254975 retry.go:31] will retry after 1.080769148s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:24.532239  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:24.532270  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:24.532277  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:24.532287  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:24.532296  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:24.532325  254975 retry.go:31] will retry after 1.261174641s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:25.798397  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:25.798430  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:25.798435  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:25.798442  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:25.798449  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:25.798465  254975 retry.go:31] will retry after 1.383083099s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:27.187782  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:27.187816  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:27.187821  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:27.187828  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:27.187834  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:27.187852  254975 retry.go:31] will retry after 1.954135672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:29.148294  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:29.148325  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:29.148330  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:29.148337  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:29.148344  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:29.148359  254975 retry.go:31] will retry after 2.632641562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:31.786946  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:31.786981  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:31.786988  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:31.786998  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:31.787010  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:31.787030  254975 retry.go:31] will retry after 3.626446493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:35.421023  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:35.421053  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:35.421059  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:35.421065  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:35.421072  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:35.421089  254975 retry.go:31] will retry after 2.800907689s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:38.228118  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:38.228155  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:38.228165  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:38.228177  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:38.228187  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:38.228204  254975 retry.go:31] will retry after 3.699626464s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:41.932868  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:41.932902  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:41.932908  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:41.932915  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:41.932922  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:41.932939  254975 retry.go:31] will retry after 6.965217948s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:48.913824  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:48.913866  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:48.913875  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:48.913899  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:48.913909  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:48.913931  254975 retry.go:31] will retry after 7.880328521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:56.800829  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:56.800868  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:56.800876  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:56.800887  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:56.800893  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:56.800915  254975 retry.go:31] will retry after 7.054585059s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:32:03.878268  254975 system_pods.go:86] 7 kube-system pods found
	I0817 22:32:03.878297  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:03.878304  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Pending
	I0817 22:32:03.878308  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Pending
	I0817 22:32:03.878311  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:03.878316  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:03.878324  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:03.878331  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:03.878351  254975 retry.go:31] will retry after 13.129481457s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0817 22:32:17.015570  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:17.015609  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:17.015619  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:17.015627  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:17.015634  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Pending
	I0817 22:32:17.015640  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:17.015647  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:17.015672  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:17.015682  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:17.015709  254975 retry.go:31] will retry after 15.332291563s: missing components: kube-controller-manager
	I0817 22:32:32.354549  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:32.354587  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:32.354596  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:32.354603  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:32.354613  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Running
	I0817 22:32:32.354619  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:32.354626  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:32.354637  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:32.354646  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:32.354657  254975 system_pods.go:126] duration metric: took 1m11.703154434s to wait for k8s-apps to be running ...
	I0817 22:32:32.354700  254975 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:32:32.354766  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:32:32.372492  254975 system_svc.go:56] duration metric: took 17.765249ms WaitForService to wait for kubelet.
	I0817 22:32:32.372541  254975 kubeadm.go:581] duration metric: took 1m17.091866023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:32:32.372573  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:32:32.377413  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:32:32.377442  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:32:32.377455  254975 node_conditions.go:105] duration metric: took 4.875282ms to run NodePressure ...
	I0817 22:32:32.377467  254975 start.go:228] waiting for startup goroutines ...
	I0817 22:32:32.377473  254975 start.go:233] waiting for cluster config update ...
	I0817 22:32:32.377483  254975 start.go:242] writing updated cluster config ...
	I0817 22:32:32.377828  254975 ssh_runner.go:195] Run: rm -f paused
	I0817 22:32:32.433865  254975 start.go:600] kubectl: 1.28.0, cluster: 1.16.0 (minor skew: 12)
	I0817 22:32:32.436131  254975 out.go:177] 
	W0817 22:32:32.437621  254975 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0817 22:32:32.439072  254975 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0817 22:32:32.440794  254975 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-294781" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:24:06 UTC, ends at Thu 2023-08-17 22:38:42 UTC. --
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.183128919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d6b5c426-13e6-4cbd-8326-466295a9c334 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.183463062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d6b5c426-13e6-4cbd-8326-466295a9c334 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.223581641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1faa6a77-22ea-4a2f-8f85-9ded010123f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.223691803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1faa6a77-22ea-4a2f-8f85-9ded010123f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.223915376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1faa6a77-22ea-4a2f-8f85-9ded010123f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.265460571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3fc5248e-926e-4ced-b32a-1798910d5f52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.265558865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3fc5248e-926e-4ced-b32a-1798910d5f52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.265759639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3fc5248e-926e-4ced-b32a-1798910d5f52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.281870584Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=78bb8c45-01e5-4c63-9cd2-f7142708758a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.282132113Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&PodSandboxMetadata{Name:busybox,Uid:120471b2-fc06-44fc-b89c-bdaa40d7bb8d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311100176162780,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:24:52.234062896Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-b54g4,Uid:3fad219a-90a1-4ec1-b6fe-12632c5f1913,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16923111001678118
37,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:24:52.234066782Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1bfda5655694b65d55b9b4d389794148d1cc17b8956a496a32c97e39d67ed462,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-25p7z,Uid:1069cee0-4d6e-4420-a3e5-c3ca300db03f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311098323656928,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-25p7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1069cee0-4d6e-4420-a3e5-c3ca300db03f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:24:52.2
34060752Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f18e7ab1-0b36-4439-9282-fbc4bf804abc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311094096599829,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T22:24:52.234061800Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&PodSandboxMetadata{Name:kube-proxy-pzpk2,Uid:4373b29e-6b11-4c28-bbb4-3d97d2151565,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311094073900060,Labels:map[string]string{controller-revision-hash: 656b5f6545,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b11-4c28-bbb4-3d97d2151565,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-08-17T22:24:52.234065877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-525875,Uid:960faaac9ae3a0d7825b7493a9c82b6f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311086801267309,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.196:8443,kubernetes.io/config.hash: 960faaac9ae3a0d7825b7493a9c82b6f,kubernetes.io/config.seen: 2023-08-17T22:24:46.210559926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&PodSandboxMetadata{N
ame:kube-controller-manager-no-preload-525875,Uid:b36a07c887c961b04c1a6eb6f19354fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311086795465781,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b36a07c887c961b04c1a6eb6f19354fe,kubernetes.io/config.seen: 2023-08-17T22:24:46.210563650Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-525875,Uid:de7d2eeeb2d0a33c6a24769d16540e4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311086784617770,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-525
875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.196:2379,kubernetes.io/config.hash: de7d2eeeb2d0a33c6a24769d16540e4a,kubernetes.io/config.seen: 2023-08-17T22:24:46.210888499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-525875,Uid:94776b0de805dc2eb274a1ccba3664d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311086747177337,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb274a1ccba3664d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94776b0de805dc2eb274a1ccba3664d8,ku
bernetes.io/config.seen: 2023-08-17T22:24:46.210565148Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=78bb8c45-01e5-4c63-9cd2-f7142708758a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.283183863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=99198d93-8389-4fb1-a70e-bb77d111d2d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.283294511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=99198d93-8389-4fb1-a70e-bb77d111d2d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.283586295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6
b11-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb
274a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.
kubernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.
container.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=99198d93-8389-4fb1-a70e-bb77d111d2d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.313687049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42829f43-072b-4c01-af8a-53fa4b16794d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.313775909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42829f43-072b-4c01-af8a-53fa4b16794d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.314035116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42829f43-072b-4c01-af8a-53fa4b16794d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.357631875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9ff641e9-ebca-4c23-8806-f93eac451b16 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.357723289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9ff641e9-ebca-4c23-8806-f93eac451b16 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.357920759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9ff641e9-ebca-4c23-8806-f93eac451b16 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.398625722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3680f7dd-79ae-4002-b6e8-92b294f21e7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.398720771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3680f7dd-79ae-4002-b6e8-92b294f21e7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.399010638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3680f7dd-79ae-4002-b6e8-92b294f21e7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.441787065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8613e2f7-af01-43b4-938e-28bfb40a2215 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.441881448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8613e2f7-af01-43b4-938e-28bfb40a2215 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:38:42 no-preload-525875 crio[732]: time="2023-08-17 22:38:42.442085811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8613e2f7-af01-43b4-938e-28bfb40a2215 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	5e92f33147487       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   2b4d6e984e3d6
	68bdd65247a55       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ceacda7783d0e
	4b2d6d0a0e671       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   6091987ad77f3
	659e02540293f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   2b4d6e984e3d6
	d5071416ecfc1       cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8                                      13 minutes ago      Running             kube-proxy                1                   7eff5cea7a2d5
	291d84856ee9a       046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd                                      13 minutes ago      Running             kube-scheduler            1                   9ff32d7ae3574
	07f7152c064dc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   3888db714dfc1
	c3d45374a533d       2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d                                      13 minutes ago      Running             kube-apiserver            1                   256a46760aba1
	8ecbcee30abd9       e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef                                      13 minutes ago      Running             kube-controller-manager   1                   77902597faf87
	
	* 
	* ==> coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55806 - 6050 "HINFO IN 8937173382687744230.8945713894719446716. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014558274s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-525875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-525875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=no-preload-525875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_15_44_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:15:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-525875
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:38:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:35:35 +0000   Thu, 17 Aug 2023 22:15:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:35:35 +0000   Thu, 17 Aug 2023 22:15:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:35:35 +0000   Thu, 17 Aug 2023 22:15:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:35:35 +0000   Thu, 17 Aug 2023 22:25:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.196
	  Hostname:    no-preload-525875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e776e3b16f4c4807aa6ba95a93d58c39
	  System UUID:                e776e3b1-6f4c-4807-aa6b-a95a93d58c39
	  Boot ID:                    48f01ae5-f920-4505-b883-dc0cc5dc6b19
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.0-rc.1
	  Kube-Proxy Version:         v1.28.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-b54g4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-525875                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-525875             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-525875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-pzpk2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-525875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-25p7z              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-525875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-525875 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-525875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-525875 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node no-preload-525875 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-525875 event: Registered Node no-preload-525875 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-525875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-525875 event: Registered Node no-preload-525875 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071934] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug17 22:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.427864] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140110] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.504956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.040204] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.110690] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.152209] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.128676] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[  +0.223428] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[ +30.772289] systemd-fstab-generator[1232]: Ignoring "noauto" for root device
	[ +14.293746] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] <==
	* {"level":"warn","ts":"2023-08-17T22:25:06.156087Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:05.649462Z","time spent":"506.588824ms","remote":"127.0.0.1:39318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1308,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-wwv75\" mod_revision:457 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-wwv75\" value_size:1249 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-wwv75\" > >"}
	{"level":"info","ts":"2023-08-17T22:25:06.15631Z","caller":"traceutil/trace.go:171","msg":"trace[1623379194] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"504.383964ms","start":"2023-08-17T22:25:05.65191Z","end":"2023-08-17T22:25:06.156294Z","steps":["trace[1623379194] 'process raft request'  (duration: 503.913838ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:06.156448Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:05.651894Z","time spent":"504.44008ms","remote":"127.0.0.1:39360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:524 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2023-08-17T22:25:06.156668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"479.79619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-525875\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2023-08-17T22:25:06.156691Z","caller":"traceutil/trace.go:171","msg":"trace[376130525] range","detail":"{range_begin:/registry/minions/no-preload-525875; range_end:; response_count:1; response_revision:625; }","duration":"479.822268ms","start":"2023-08-17T22:25:05.676863Z","end":"2023-08-17T22:25:06.156685Z","steps":["trace[376130525] 'agreement among raft nodes before linearized reading'  (duration: 479.74218ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:06.156707Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:05.676844Z","time spent":"479.859729ms","remote":"127.0.0.1:39296","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4464,"request content":"key:\"/registry/minions/no-preload-525875\" "}
	{"level":"warn","ts":"2023-08-17T22:25:06.156816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.639033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:25:06.15683Z","caller":"traceutil/trace.go:171","msg":"trace[840329666] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:625; }","duration":"468.651873ms","start":"2023-08-17T22:25:05.688173Z","end":"2023-08-17T22:25:06.156824Z","steps":["trace[840329666] 'agreement among raft nodes before linearized reading'  (duration: 468.627123ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:06.156841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:05.688157Z","time spent":"468.681495ms","remote":"127.0.0.1:39258","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-08-17T22:25:29.422903Z","caller":"traceutil/trace.go:171","msg":"trace[1234259991] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"128.15293ms","start":"2023-08-17T22:25:29.294716Z","end":"2023-08-17T22:25:29.422868Z","steps":["trace[1234259991] 'process raft request'  (duration: 127.944397ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T22:25:29.77882Z","caller":"traceutil/trace.go:171","msg":"trace[638718583] linearizableReadLoop","detail":"{readStateIndex:691; appliedIndex:690; }","duration":"350.223706ms","start":"2023-08-17T22:25:29.428582Z","end":"2023-08-17T22:25:29.778806Z","steps":["trace[638718583] 'read index received'  (duration: 308.089994ms)","trace[638718583] 'applied index is now lower than readState.Index'  (duration: 42.132974ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T22:25:29.779007Z","caller":"traceutil/trace.go:171","msg":"trace[221553669] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"475.303542ms","start":"2023-08-17T22:25:29.303695Z","end":"2023-08-17T22:25:29.778998Z","steps":["trace[221553669] 'process raft request'  (duration: 433.066867ms)","trace[221553669] 'compare'  (duration: 41.769491ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:25:29.779136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.303678Z","time spent":"475.385127ms","remote":"127.0.0.1:39298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4055,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-25p7z\" mod_revision:631 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-25p7z\" value_size:3989 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-25p7z\" > >"}
	{"level":"warn","ts":"2023-08-17T22:25:29.779136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.7662ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2023-08-17T22:25:29.779461Z","caller":"traceutil/trace.go:171","msg":"trace[720060691] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577; range_end:; response_count:1; response_revision:644; }","duration":"351.093914ms","start":"2023-08-17T22:25:29.428356Z","end":"2023-08-17T22:25:29.77945Z","steps":["trace[720060691] 'agreement among raft nodes before linearized reading'  (duration: 350.749934ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:29.779528Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.428336Z","time spent":"351.182377ms","remote":"127.0.0.1:39274","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":805,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" "}
	{"level":"info","ts":"2023-08-17T22:25:30.248594Z","caller":"traceutil/trace.go:171","msg":"trace[635304970] linearizableReadLoop","detail":"{readStateIndex:692; appliedIndex:691; }","duration":"461.067955ms","start":"2023-08-17T22:25:29.787508Z","end":"2023-08-17T22:25:30.248576Z","steps":["trace[635304970] 'read index received'  (duration: 366.035648ms)","trace[635304970] 'applied index is now lower than readState.Index'  (duration: 95.03112ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T22:25:30.248723Z","caller":"traceutil/trace.go:171","msg":"trace[1843546038] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"464.56781ms","start":"2023-08-17T22:25:29.784138Z","end":"2023-08-17T22:25:30.248706Z","steps":["trace[1843546038] 'process raft request'  (duration: 369.46243ms)","trace[1843546038] 'compare'  (duration: 94.738361ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:25:30.248845Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.784121Z","time spent":"464.666813ms","remote":"127.0.0.1:39274","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" mod_revision:601 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" value_size:672 lease:4841239881108774533 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" > >"}
	{"level":"warn","ts":"2023-08-17T22:25:30.248861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.382774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-525875\" ","response":"range_response_count:1 size:4693"}
	{"level":"info","ts":"2023-08-17T22:25:30.249069Z","caller":"traceutil/trace.go:171","msg":"trace[65102174] range","detail":"{range_begin:/registry/minions/no-preload-525875; range_end:; response_count:1; response_revision:645; }","duration":"461.601268ms","start":"2023-08-17T22:25:29.78746Z","end":"2023-08-17T22:25:30.249061Z","steps":["trace[65102174] 'agreement among raft nodes before linearized reading'  (duration: 461.263036ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:30.249134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.78745Z","time spent":"461.672509ms","remote":"127.0.0.1:39296","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4716,"request content":"key:\"/registry/minions/no-preload-525875\" "}
	{"level":"info","ts":"2023-08-17T22:34:50.544578Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":867}
	{"level":"info","ts":"2023-08-17T22:34:50.54854Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":867,"took":"2.983304ms","hash":2385570467}
	{"level":"info","ts":"2023-08-17T22:34:50.548718Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2385570467,"revision":867,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  22:38:42 up 14 min,  0 users,  load average: 0.11, 0.19, 0.17
	Linux no-preload-525875 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] <==
	* I0817 22:34:53.273707       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:34:53.273491       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:34:53.273983       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:34:53.275309       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:35:52.092838       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:35:52.093085       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:35:53.273957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:35:53.274032       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:35:53.274041       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:35:53.276534       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:35:53.276631       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:35:53.276675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:36:52.093014       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:36:52.093103       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:37:52.092659       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:37:52.092716       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:37:53.274578       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:37:53.274765       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:37:53.274797       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:37:53.277287       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:37:53.277475       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:37:53.277488       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] <==
	* I0817 22:33:05.750737       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:33:35.212515       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:33:35.760215       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:34:05.220997       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:34:05.770336       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:34:35.227316       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:34:35.780811       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:35:05.234993       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:35:05.792349       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:35:35.242659       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:35:35.803295       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:36:05.252289       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:36:05.323286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="306.542µs"
	I0817 22:36:05.816315       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0817 22:36:19.303204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="172.016µs"
	E0817 22:36:35.257992       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:36:35.830203       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:37:05.265585       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:37:05.843640       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:37:35.271722       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:37:35.855940       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:38:05.278291       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:38:05.866966       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:38:35.285257       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:38:35.876585       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] <==
	* I0817 22:24:55.042851       1 server_others.go:69] "Using iptables proxy"
	I0817 22:24:55.071865       1 node.go:141] Successfully retrieved node IP: 192.168.61.196
	I0817 22:24:55.115855       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0817 22:24:55.115904       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0817 22:24:55.118787       1 server_others.go:152] "Using iptables Proxier"
	I0817 22:24:55.118858       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 22:24:55.119043       1 server.go:846] "Version info" version="v1.28.0-rc.1"
	I0817 22:24:55.119078       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:24:55.120057       1 config.go:188] "Starting service config controller"
	I0817 22:24:55.120115       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 22:24:55.120134       1 config.go:97] "Starting endpoint slice config controller"
	I0817 22:24:55.120138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 22:24:55.121008       1 config.go:315] "Starting node config controller"
	I0817 22:24:55.121046       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 22:24:55.220751       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 22:24:55.220806       1 shared_informer.go:318] Caches are synced for service config
	I0817 22:24:55.221144       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] <==
	* I0817 22:24:50.120999       1 serving.go:348] Generated self-signed cert in-memory
	W0817 22:24:52.143603       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 22:24:52.143886       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 22:24:52.144002       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 22:24:52.144111       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 22:24:52.286736       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0-rc.1"
	I0817 22:24:52.286792       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:24:52.290594       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0817 22:24:52.294497       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0817 22:24:52.294688       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 22:24:52.294730       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 22:24:52.395019       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:24:06 UTC, ends at Thu 2023-08-17 22:38:43 UTC. --
	Aug 17 22:35:51 no-preload-525875 kubelet[1238]: E0817 22:35:51.322742    1238 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 17 22:35:51 no-preload-525875 kubelet[1238]: E0817 22:35:51.322825    1238 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 17 22:35:51 no-preload-525875 kubelet[1238]: E0817 22:35:51.323114    1238 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pklbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-25p7z_kube-system(1069cee0-4d6e-4420-a3e5-c3ca300db03f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 17 22:35:51 no-preload-525875 kubelet[1238]: E0817 22:35:51.323166    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:36:05 no-preload-525875 kubelet[1238]: E0817 22:36:05.282932    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:36:19 no-preload-525875 kubelet[1238]: E0817 22:36:19.282486    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:36:32 no-preload-525875 kubelet[1238]: E0817 22:36:32.283480    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:36:43 no-preload-525875 kubelet[1238]: E0817 22:36:43.282335    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:36:46 no-preload-525875 kubelet[1238]: E0817 22:36:46.309682    1238 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:36:46 no-preload-525875 kubelet[1238]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:36:46 no-preload-525875 kubelet[1238]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:36:46 no-preload-525875 kubelet[1238]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 17 22:36:54 no-preload-525875 kubelet[1238]: E0817 22:36:54.282659    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:37:06 no-preload-525875 kubelet[1238]: E0817 22:37:06.282608    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:37:19 no-preload-525875 kubelet[1238]: E0817 22:37:19.283438    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:37:30 no-preload-525875 kubelet[1238]: E0817 22:37:30.283303    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:37:44 no-preload-525875 kubelet[1238]: E0817 22:37:44.282092    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:37:46 no-preload-525875 kubelet[1238]: E0817 22:37:46.310261    1238 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:37:46 no-preload-525875 kubelet[1238]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:37:46 no-preload-525875 kubelet[1238]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:37:46 no-preload-525875 kubelet[1238]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 17 22:37:58 no-preload-525875 kubelet[1238]: E0817 22:37:58.282104    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:38:11 no-preload-525875 kubelet[1238]: E0817 22:38:11.282823    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:38:26 no-preload-525875 kubelet[1238]: E0817 22:38:26.283068    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:38:41 no-preload-525875 kubelet[1238]: E0817 22:38:41.282975    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	
	* 
	* ==> storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] <==
	* I0817 22:25:25.728191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:25:25.749533       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:25:25.749644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:25:43.161255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:25:43.161961       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7841fb22-cdbf-45fb-a010-e0a54a3a2824", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-525875_785a7956-f4d2-4576-b4a5-4686072cc982 became leader
	I0817 22:25:43.162085       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-525875_785a7956-f4d2-4576-b4a5-4686072cc982!
	I0817 22:25:43.263172       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-525875_785a7956-f4d2-4576-b4a5-4686072cc982!
	
	* 
	* ==> storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] <==
	* I0817 22:24:55.051184       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0817 22:25:25.056168       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-525875 -n no-preload-525875
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-525875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-25p7z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-525875 describe pod metrics-server-57f55c9bc5-25p7z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-525875 describe pod metrics-server-57f55c9bc5-25p7z: exit status 1 (79.68164ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-25p7z" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-525875 describe pod metrics-server-57f55c9bc5-25p7z: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.62s)
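The repeated ErrImagePull / ImagePullBackOff entries for metrics-server in the kubelet log above line up with the intentional registry override recorded in the Audit log (addons enable metrics-server ... --registries=MetricsServer=fake.domain), so those pull failures are expected for this profile. The NotFound result from the describe call is likely only because the command omitted the pod's namespace; the kubelet log places metrics-server-57f55c9bc5-25p7z in kube-system. A minimal manual re-check, assuming the no-preload-525875 context is still available, might look like:

	# Same query the post-mortem helper runs: list pods that are not Running, across all namespaces
	kubectl --context no-preload-525875 get po -A --field-selector=status.phase!=Running
	# Describe the pod in its actual namespace (kube-system) rather than the default namespace
	kubectl --context no-preload-525875 -n kube-system describe pod metrics-server-57f55c9bc5-25p7z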

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.47s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0817 22:30:20.385509  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:30:31.664640  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-437183 -n embed-certs-437183
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:39:19.570926277 +0000 UTC m=+5333.211693999
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
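For reference, a manual spot check of the same condition the test polls (pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace), assuming the embed-certs-437183 context is still reachable, could be:

	# List any dashboard pods and their current phase
	kubectl --context embed-certs-437183 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# Block (up to 60s) until a dashboard pod reports Ready, mirroring the test's wait
	kubectl --context embed-certs-437183 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=60s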
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-437183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-437183 logs -n 25: (1.718397439s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-975779 sudo cat                              | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo find                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo crio                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-975779                                       | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:20:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:20:16.712686  255491 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:20:16.712825  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.712835  255491 out.go:309] Setting ErrFile to fd 2...
	I0817 22:20:16.712839  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.713062  255491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:20:16.713667  255491 out.go:303] Setting JSON to false
	I0817 22:20:16.714624  255491 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25342,"bootTime":1692285475,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:20:16.714682  255491 start.go:138] virtualization: kvm guest
	I0817 22:20:16.717535  255491 out.go:177] * [default-k8s-diff-port-321287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:20:16.719151  255491 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:20:16.720536  255491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:20:16.719158  255491 notify.go:220] Checking for updates...
	I0817 22:20:16.724470  255491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:20:16.726182  255491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:20:16.727902  255491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:20:16.729516  255491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:20:16.731373  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:20:16.731749  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.731825  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.746961  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0817 22:20:16.747404  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.748088  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.748116  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.748449  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.748618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.748847  255491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:20:16.749194  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.749239  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.764882  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0817 22:20:16.765357  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.765874  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.765901  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.766289  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.766480  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.802457  255491 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:20:16.804215  255491 start.go:298] selected driver: kvm2
	I0817 22:20:16.804235  255491 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 Cl
usterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.804379  255491 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:20:16.805157  255491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.805248  255491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:20:16.821166  255491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:20:16.821564  255491 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 22:20:16.821606  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:20:16.821619  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:20:16.821631  255491 start_flags.go:319] config:
	{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.821815  255491 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.823863  255491 out.go:177] * Starting control plane node default-k8s-diff-port-321287 in cluster default-k8s-diff-port-321287
	I0817 22:20:16.825296  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:20:16.825350  255491 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:20:16.825365  255491 cache.go:57] Caching tarball of preloaded images
	I0817 22:20:16.825521  255491 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:20:16.825536  255491 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 22:20:16.825660  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
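
The profile config saved to config.json here is the same struct dumped at start.go:902 above. A minimal sketch of reading a few of those fields back out of that file; the struct below mirrors only a handful of the logged fields (the real minikube config type is much larger), and the path is the one from this log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type node struct {
		IP   string
		Port int
	}

	type clusterConfig struct {
		Name             string
		Memory           int
		CPUs             int
		KubernetesConfig struct {
			KubernetesVersion string
			ClusterName       string
			ContainerRuntime  string
		}
		Nodes []node
	}

	func main() {
		// Path taken from the log line above; adjust for your own environment.
		data, err := os.ReadFile("/home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json")
		if err != nil {
			fmt.Println(err)
			return
		}
		var cfg clusterConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%s: k8s %s on %s, node %s:%d\n",
			cfg.Name, cfg.KubernetesConfig.KubernetesVersion,
			cfg.KubernetesConfig.ContainerRuntime, cfg.Nodes[0].IP, cfg.Nodes[0].Port)
	}
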
	I0817 22:20:16.825870  255491 start.go:365] acquiring machines lock for default-k8s-diff-port-321287: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:20:17.790384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:20.862432  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:26.942301  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:30.014393  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:36.094411  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:39.166376  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:45.246382  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:48.318418  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:54.398388  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:57.470394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:03.550380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:06.622365  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:12.702351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:15.774370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:21.854413  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:24.926351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:31.006415  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:34.078332  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:40.158437  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:43.230410  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:49.310359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:52.382386  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:58.462394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:01.534395  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:07.614359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:10.686384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:16.766363  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:19.838352  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:25.918380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:28.990416  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:35.070383  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:38.142364  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:44.222341  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:47.294387  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:53.374378  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:56.446375  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:02.526335  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:05.598406  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:11.678435  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:14.750370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:20.830484  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:23.902346  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:29.982456  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:33.054379  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:39.134436  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:42.206472  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:48.286396  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:51.358348  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
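
The long run of "Error dialing TCP ... no route to host" lines above is libmachine polling the stopped VM's SSH endpoint until it becomes reachable. A minimal, hypothetical sketch of that polling pattern; the address, interval, and deadline below are illustrative values, not minikube's actual ones:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH dials addr repeatedly until it answers or the deadline passes.
	func waitForSSH(addr string, deadline, interval time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			// Mirrors the log lines above: the dial fails while the guest is down.
			fmt.Printf("Error dialing TCP: %v (retrying)\n", err)
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		if err := waitForSSH("192.168.72.56:22", 5*time.Minute, 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}
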
	I0817 22:23:54.362645  255057 start.go:369] acquired machines lock for "no-preload-525875" in 4m31.301140971s
	I0817 22:23:54.362883  255057 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:23:54.362929  255057 fix.go:54] fixHost starting: 
	I0817 22:23:54.363423  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:23:54.363467  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:23:54.379127  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0817 22:23:54.379699  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:23:54.380334  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:23:54.380357  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:23:54.380797  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:23:54.381004  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:23:54.381209  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:23:54.383099  255057 fix.go:102] recreateIfNeeded on no-preload-525875: state=Stopped err=<nil>
	I0817 22:23:54.383145  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	W0817 22:23:54.383332  255057 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:23:54.385187  255057 out.go:177] * Restarting existing kvm2 VM for "no-preload-525875" ...
	I0817 22:23:54.360325  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:23:54.360394  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:23:54.362467  254975 machine.go:91] provisioned docker machine in 4m37.411699893s
	I0817 22:23:54.362520  254975 fix.go:56] fixHost completed within 4m37.434281244s
	I0817 22:23:54.362529  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 4m37.434304432s
	W0817 22:23:54.362577  254975 start.go:672] error starting host: provision: host is not running
	W0817 22:23:54.363017  254975 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0817 22:23:54.363033  254975 start.go:687] Will try again in 5 seconds ...
	I0817 22:23:54.386615  255057 main.go:141] libmachine: (no-preload-525875) Calling .Start
	I0817 22:23:54.386791  255057 main.go:141] libmachine: (no-preload-525875) Ensuring networks are active...
	I0817 22:23:54.387647  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network default is active
	I0817 22:23:54.387973  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network mk-no-preload-525875 is active
	I0817 22:23:54.388332  255057 main.go:141] libmachine: (no-preload-525875) Getting domain xml...
	I0817 22:23:54.389183  255057 main.go:141] libmachine: (no-preload-525875) Creating domain...
	I0817 22:23:55.639391  255057 main.go:141] libmachine: (no-preload-525875) Waiting to get IP...
	I0817 22:23:55.640405  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.640824  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.640956  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.640807  256033 retry.go:31] will retry after 256.854902ms: waiting for machine to come up
	I0817 22:23:55.899499  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.900003  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.900027  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.899976  256033 retry.go:31] will retry after 327.686689ms: waiting for machine to come up
	I0817 22:23:56.229604  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.230132  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.230156  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.230040  256033 retry.go:31] will retry after 464.52975ms: waiting for machine to come up
	I0817 22:23:56.695962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.696359  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.696397  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.696313  256033 retry.go:31] will retry after 556.975938ms: waiting for machine to come up
	I0817 22:23:57.255156  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.255625  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.255664  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.255564  256033 retry.go:31] will retry after 654.756806ms: waiting for machine to come up
	I0817 22:23:57.911407  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.911781  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.911805  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.911733  256033 retry.go:31] will retry after 915.751745ms: waiting for machine to come up
	I0817 22:23:59.364671  254975 start.go:365] acquiring machines lock for old-k8s-version-294781: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:23:58.828834  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:58.829178  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:58.829236  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:58.829153  256033 retry.go:31] will retry after 1.176413613s: waiting for machine to come up
	I0817 22:24:00.006988  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:00.007533  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:00.007603  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:00.007525  256033 retry.go:31] will retry after 1.031006631s: waiting for machine to come up
	I0817 22:24:01.039920  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:01.040354  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:01.040386  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:01.040293  256033 retry.go:31] will retry after 1.781447675s: waiting for machine to come up
	I0817 22:24:02.823240  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:02.823711  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:02.823755  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:02.823652  256033 retry.go:31] will retry after 1.47392319s: waiting for machine to come up
	I0817 22:24:04.299094  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:04.299543  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:04.299572  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:04.299479  256033 retry.go:31] will retry after 1.990284782s: waiting for machine to come up
	I0817 22:24:06.292369  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:06.292831  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:06.292862  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:06.292749  256033 retry.go:31] will retry after 3.34318874s: waiting for machine to come up
	I0817 22:24:09.637907  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:09.638389  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:09.638423  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:09.638335  256033 retry.go:31] will retry after 3.298106143s: waiting for machine to come up
	I0817 22:24:12.939215  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939668  255057 main.go:141] libmachine: (no-preload-525875) Found IP for machine: 192.168.61.196
	I0817 22:24:12.939692  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has current primary IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939709  255057 main.go:141] libmachine: (no-preload-525875) Reserving static IP address...
	I0817 22:24:12.940293  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.940330  255057 main.go:141] libmachine: (no-preload-525875) Reserved static IP address: 192.168.61.196
	I0817 22:24:12.940347  255057 main.go:141] libmachine: (no-preload-525875) DBG | skip adding static IP to network mk-no-preload-525875 - found existing host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"}
	I0817 22:24:12.940364  255057 main.go:141] libmachine: (no-preload-525875) DBG | Getting to WaitForSSH function...
	I0817 22:24:12.940381  255057 main.go:141] libmachine: (no-preload-525875) Waiting for SSH to be available...
	I0817 22:24:12.942523  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.942835  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.942870  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.943013  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH client type: external
	I0817 22:24:12.943058  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa (-rw-------)
	I0817 22:24:12.943104  255057 main.go:141] libmachine: (no-preload-525875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:12.943125  255057 main.go:141] libmachine: (no-preload-525875) DBG | About to run SSH command:
	I0817 22:24:12.943135  255057 main.go:141] libmachine: (no-preload-525875) DBG | exit 0
	I0817 22:24:14.123211  255215 start.go:369] acquired machines lock for "embed-certs-437183" in 4m31.345681226s
	I0817 22:24:14.123281  255215 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:14.123298  255215 fix.go:54] fixHost starting: 
	I0817 22:24:14.123769  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:14.123822  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:14.141321  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0817 22:24:14.141722  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:14.142372  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:24:14.142409  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:14.142871  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:14.143076  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:14.143300  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:24:14.144928  255215 fix.go:102] recreateIfNeeded on embed-certs-437183: state=Stopped err=<nil>
	I0817 22:24:14.144960  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	W0817 22:24:14.145216  255215 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:14.148036  255215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-437183" ...
	I0817 22:24:13.033987  255057 main.go:141] libmachine: (no-preload-525875) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:13.034450  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetConfigRaw
	I0817 22:24:13.035251  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.037756  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038141  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.038176  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038475  255057 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/config.json ...
	I0817 22:24:13.038679  255057 machine.go:88] provisioning docker machine ...
	I0817 22:24:13.038704  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.038922  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039086  255057 buildroot.go:166] provisioning hostname "no-preload-525875"
	I0817 22:24:13.039109  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039238  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.041385  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041666  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.041698  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041838  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.042022  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042206  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042396  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.042612  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.043170  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.043189  255057 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-525875 && echo "no-preload-525875" | sudo tee /etc/hostname
	I0817 22:24:13.177388  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-525875
	
	I0817 22:24:13.177433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.180249  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180571  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.180599  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180808  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.181054  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181224  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181371  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.181544  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.181969  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.181994  255057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-525875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-525875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-525875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:13.307614  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:13.307675  255057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:13.307719  255057 buildroot.go:174] setting up certificates
	I0817 22:24:13.307731  255057 provision.go:83] configureAuth start
	I0817 22:24:13.307745  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.308044  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.311084  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311457  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.311491  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311665  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.313712  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314066  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.314101  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314252  255057 provision.go:138] copyHostCerts
	I0817 22:24:13.314354  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:13.314397  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:13.314495  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:13.314610  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:13.314623  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:13.314661  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:13.314735  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:13.314745  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:13.314779  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:13.314841  255057 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.no-preload-525875 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube no-preload-525875]
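
provision.go:112 above issues a server certificate whose SANs cover the VM IP, localhost, and the machine name. A simplified sketch of issuing such a certificate with Go's crypto/x509; unlike the real code it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; the real provisioner loads an existing CA cert and key.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN list and org seen in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-525875"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-525875"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.196"), net.ParseIP("127.0.0.1")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
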
	I0817 22:24:13.395589  255057 provision.go:172] copyRemoteCerts
	I0817 22:24:13.395693  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:13.395724  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.398603  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.398936  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.398972  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.399154  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.399379  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.399566  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.399717  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.487194  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:13.510918  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:24:13.534013  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:13.556876  255057 provision.go:86] duration metric: configureAuth took 249.122979ms
	I0817 22:24:13.556910  255057 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:13.557143  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:13.557265  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.560140  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560483  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.560514  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560748  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.560965  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561143  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561274  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.561516  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.562128  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.562155  255057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:13.863145  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:13.863181  255057 machine.go:91] provisioned docker machine in 824.487372ms
	I0817 22:24:13.863206  255057 start.go:300] post-start starting for "no-preload-525875" (driver="kvm2")
	I0817 22:24:13.863219  255057 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:13.863247  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.863636  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:13.863681  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.866612  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.866950  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.867000  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.867115  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.867333  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.867524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.867695  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.957157  255057 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:13.961765  255057 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:13.961801  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:13.961919  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:13.962002  255057 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:13.962116  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:13.971105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:13.999336  255057 start.go:303] post-start completed in 136.111451ms
	I0817 22:24:13.999367  255057 fix.go:56] fixHost completed within 19.636437946s
	I0817 22:24:13.999391  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.002294  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002689  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.002717  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002995  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.003236  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003572  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.003744  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:14.004145  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:14.004160  255057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:14.122987  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311054.069328214
	
	I0817 22:24:14.123011  255057 fix.go:206] guest clock: 1692311054.069328214
	I0817 22:24:14.123019  255057 fix.go:219] Guest: 2023-08-17 22:24:14.069328214 +0000 UTC Remote: 2023-08-17 22:24:13.999370872 +0000 UTC m=+291.082280559 (delta=69.957342ms)
	I0817 22:24:14.123080  255057 fix.go:190] guest clock delta is within tolerance: 69.957342ms
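
fix.go:219 above compares the guest's reported wall clock with the host's and accepts the 69.957342ms delta as within tolerance. A small worked sketch of that check, using the timestamp from this log; the 2s tolerance below is an assumed value for illustration, not necessarily minikube's:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance returns the absolute guest/host clock delta and whether it
	// is small enough to accept without resetting the guest clock.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1692311054, 69328214)          // "1692311054.069328214" from the log
		host := guest.Add(-69957342 * time.Nanosecond)    // reconstructs the logged 69.957342ms delta
		d, ok := withinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
	}
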
	I0817 22:24:14.123087  255057 start.go:83] releasing machines lock for "no-preload-525875", held for 19.760401588s
	I0817 22:24:14.123125  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.123445  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:14.126573  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.126925  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.126962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.127146  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127781  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127974  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.128071  255057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:14.128125  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.128226  255057 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:14.128258  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.131020  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131333  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131367  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131390  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.131715  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.131789  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131829  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131895  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.131975  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.132057  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.132156  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.132272  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.132425  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.219665  255057 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:14.247437  255057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:14.400674  255057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:14.408384  255057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:14.408502  255057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:14.423811  255057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:14.423860  255057 start.go:466] detecting cgroup driver to use...
	I0817 22:24:14.423953  255057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:14.436628  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:14.448671  255057 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:14.448765  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:14.461946  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:14.475294  255057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:14.581194  255057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:14.708045  255057 docker.go:212] disabling docker service ...
	I0817 22:24:14.708110  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:14.722033  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:14.733323  255057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:14.857587  255057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:14.980798  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:14.994728  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:15.012428  255057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:15.012505  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.021683  255057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:15.021763  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.031095  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.040825  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.050770  255057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:15.060644  255057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:15.068941  255057 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:15.069022  255057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:15.081634  255057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:15.090552  255057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:15.205174  255057 ssh_runner.go:195] Run: sudo systemctl restart crio
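
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager before restarting the service. A sketch of the same rewrite done directly in Go, for illustration only; minikube itself shells out to sed as logged:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		out := string(data)
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(conf, []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
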
	I0817 22:24:15.383127  255057 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:15.383224  255057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:15.391893  255057 start.go:534] Will wait 60s for crictl version
	I0817 22:24:15.391983  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.398121  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:15.450273  255057 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:15.450368  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.506757  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.560170  255057 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:24:14.149845  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Start
	I0817 22:24:14.150032  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring networks are active...
	I0817 22:24:14.150803  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network default is active
	I0817 22:24:14.151110  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network mk-embed-certs-437183 is active
	I0817 22:24:14.151492  255215 main.go:141] libmachine: (embed-certs-437183) Getting domain xml...
	I0817 22:24:14.152247  255215 main.go:141] libmachine: (embed-certs-437183) Creating domain...
	I0817 22:24:15.472135  255215 main.go:141] libmachine: (embed-certs-437183) Waiting to get IP...
	I0817 22:24:15.473014  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.473413  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.473492  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.473421  256157 retry.go:31] will retry after 194.38634ms: waiting for machine to come up
	I0817 22:24:15.670047  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.670479  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.670528  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.670445  256157 retry.go:31] will retry after 332.988154ms: waiting for machine to come up
	I0817 22:24:16.005357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.005862  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.005898  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.005790  256157 retry.go:31] will retry after 376.364025ms: waiting for machine to come up
	I0817 22:24:16.384423  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.384866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.384916  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.384805  256157 retry.go:31] will retry after 392.048125ms: waiting for machine to come up
	I0817 22:24:16.778356  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.778744  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.778780  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.778683  256157 retry.go:31] will retry after 688.962088ms: waiting for machine to come up
	I0817 22:24:17.469767  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:17.470257  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:17.470287  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:17.470211  256157 retry.go:31] will retry after 660.617465ms: waiting for machine to come up
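The embed-certs-437183 block keeps asking libvirt for a DHCP lease matching the domain's MAC address and, as the retry.go lines show, backs off with a growing, jittered delay until an IP appears. A small sketch of that wait loop under stated assumptions: lookupIP is a stand-in for the libvirt lease query, and the jittered doubling only approximates the delays printed in the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("unable to find current IP address")

    // lookupIP stands in for querying the libvirt network for a DHCP lease that
    // matches the domain's MAC address; it is an assumption, not minikube's API.
    func lookupIP(mac string) (string, error) {
    	return "", errNoLease // pretend the lease has not appeared yet
    }

    // waitForIP retries lookupIP with a growing, jittered delay, as the
    // "will retry after ..." lines in the log suggest.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
    	delay := 200 * time.Millisecond
    	start := time.Now()
    	for time.Since(start) < deadline {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return "", fmt.Errorf("no IP for %s within %v", mac, deadline)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:c7:c0:2b", 2*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }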
	I0817 22:24:15.561695  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:15.564750  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565097  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:15.565127  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565409  255057 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:15.569673  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:15.584980  255057 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:24:15.585030  255057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:15.617365  255057 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:24:15.617396  255057 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.0-rc.1 registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 registry.k8s.io/kube-scheduler:v1.28.0-rc.1 registry.k8s.io/kube-proxy:v1.28.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:24:15.617470  255057 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.617497  255057 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.617529  255057 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.617606  255057 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.617541  255057 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.617637  255057 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0817 22:24:15.617507  255057 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.617985  255057 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619154  255057 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0817 22:24:15.619338  255057 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619355  255057 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.619350  255057 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.619369  255057 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.619335  255057 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.619381  255057 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.619414  255057 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.793551  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.793935  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.796339  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.797436  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.806385  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.813161  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0817 22:24:15.840200  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.935464  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.940863  255057 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0817 22:24:15.940940  255057 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.940881  255057 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" does not exist at hash "046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd" in container runtime
	I0817 22:24:15.941028  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.941031  255057 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.941115  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952609  255057 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" does not exist at hash "e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef" in container runtime
	I0817 22:24:15.952687  255057 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.952709  255057 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0817 22:24:15.952741  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952751  255057 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.952790  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.007640  255057 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" does not exist at hash "2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d" in container runtime
	I0817 22:24:16.007686  255057 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.007740  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099763  255057 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.0-rc.1" does not exist at hash "cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8" in container runtime
	I0817 22:24:16.099817  255057 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.099873  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099909  255057 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0817 22:24:16.099969  255057 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.099980  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:16.100019  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.100052  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:16.100127  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:16.100145  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:16.100198  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.105175  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.197301  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0817 22:24:16.197377  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197418  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197432  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197437  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.197476  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.197421  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:16.197520  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197535  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.214043  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0817 22:24:16.214189  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:16.225659  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1 (exists)
	I0817 22:24:16.225690  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225750  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225882  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.225973  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.229070  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1 (exists)
	I0817 22:24:16.229235  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1 (exists)
	I0817 22:24:16.258828  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0817 22:24:16.258905  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 22:24:16.258990  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0817 22:24:16.259013  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
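Because no preload tarball exists for v1.28.0-rc.1, this run falls back to per-image caching: it inspects each image with podman, removes stale copies with crictl rmi, stats the cached tarball under /var/lib/minikube/images (skipping the copy when it already exists), and then imports it with podman load. A rough sketch of that exists-then-load step, assuming the tarballs are already on the node; paths come from the log, the helper name and local execution are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage checks whether the cached tarball is already present and,
    // if so, imports it into the CRI-O/podman image store. The scp step that
    // minikube performs over SSH is omitted for brevity.
    func loadCachedImage(tarball string) error {
    	if err := exec.Command("stat", tarball).Run(); err != nil {
    		return fmt.Errorf("cached image %s not present: %w", tarball, err)
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	for _, t := range []string{
    		"/var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1",
    		"/var/lib/minikube/images/coredns_v1.10.1",
    	} {
    		if err := loadCachedImage(t); err != nil {
    			fmt.Println(err)
    		}
    	}
    }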
	I0817 22:24:18.132851  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:18.133243  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:18.133310  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:18.133225  256157 retry.go:31] will retry after 900.178694ms: waiting for machine to come up
	I0817 22:24:19.035179  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:19.035579  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:19.035615  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:19.035514  256157 retry.go:31] will retry after 1.198702878s: waiting for machine to come up
	I0817 22:24:20.236711  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:20.237240  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:20.237273  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:20.237201  256157 retry.go:31] will retry after 1.809846012s: waiting for machine to come up
	I0817 22:24:22.048866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:22.049357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:22.049392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:22.049300  256157 retry.go:31] will retry after 1.671738979s: waiting for machine to come up
	I0817 22:24:18.395405  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1: (2.169611406s)
	I0817 22:24:18.395443  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 from cache
	I0817 22:24:18.395478  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (2.169478272s)
	I0817 22:24:18.395493  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.136469625s)
	I0817 22:24:18.395493  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:18.395509  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0817 22:24:18.395512  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1 (exists)
	I0817 22:24:18.395560  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:20.871009  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1: (2.475415377s)
	I0817 22:24:20.871043  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 from cache
	I0817 22:24:20.871073  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:20.871129  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:23.722312  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:23.722829  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:23.722864  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:23.722757  256157 retry.go:31] will retry after 1.856182792s: waiting for machine to come up
	I0817 22:24:25.580432  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:25.580936  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:25.580969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:25.580873  256157 retry.go:31] will retry after 2.404448523s: waiting for machine to come up
	I0817 22:24:23.529377  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1: (2.658213494s)
	I0817 22:24:23.529418  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 from cache
	I0817 22:24:23.529456  255057 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:23.529532  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:24.907071  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.377507339s)
	I0817 22:24:24.907105  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0817 22:24:24.907135  255057 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:24.907203  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:27.988784  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:27.989226  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:27.989252  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:27.989214  256157 retry.go:31] will retry after 4.145677854s: waiting for machine to come up
	I0817 22:24:32.139031  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139722  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has current primary IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139755  255215 main.go:141] libmachine: (embed-certs-437183) Found IP for machine: 192.168.39.186
	I0817 22:24:32.139768  255215 main.go:141] libmachine: (embed-certs-437183) Reserving static IP address...
	I0817 22:24:32.140361  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.140408  255215 main.go:141] libmachine: (embed-certs-437183) Reserved static IP address: 192.168.39.186
	I0817 22:24:32.140428  255215 main.go:141] libmachine: (embed-certs-437183) DBG | skip adding static IP to network mk-embed-certs-437183 - found existing host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"}
	I0817 22:24:32.140450  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Getting to WaitForSSH function...
	I0817 22:24:32.140465  255215 main.go:141] libmachine: (embed-certs-437183) Waiting for SSH to be available...
	I0817 22:24:32.142752  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143141  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.143192  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143343  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH client type: external
	I0817 22:24:32.143392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa (-rw-------)
	I0817 22:24:32.143431  255215 main.go:141] libmachine: (embed-certs-437183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:32.143459  255215 main.go:141] libmachine: (embed-certs-437183) DBG | About to run SSH command:
	I0817 22:24:32.143475  255215 main.go:141] libmachine: (embed-certs-437183) DBG | exit 0
	I0817 22:24:32.246211  255215 main.go:141] libmachine: (embed-certs-437183) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:32.246582  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetConfigRaw
	I0817 22:24:32.247284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.249789  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250204  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.250237  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250567  255215 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/config.json ...
	I0817 22:24:32.250808  255215 machine.go:88] provisioning docker machine ...
	I0817 22:24:32.250831  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:32.251049  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251209  255215 buildroot.go:166] provisioning hostname "embed-certs-437183"
	I0817 22:24:32.251230  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251344  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.253729  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254094  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.254124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254276  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.254434  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254654  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254817  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.254981  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.255466  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.255481  255215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-437183 && echo "embed-certs-437183" | sudo tee /etc/hostname
	I0817 22:24:32.412247  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437183
	
	I0817 22:24:32.412284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.415194  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415508  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.415561  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415666  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.415910  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416113  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416297  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.416501  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.417004  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.417024  255215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-437183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-437183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-437183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:32.559200  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
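Provisioning runs shell snippets like the hostname and /etc/hosts edit above over SSH, authenticating with the machine's id_rsa key. A compact sketch of running one such command with golang.org/x/crypto/ssh; this is a generic client for illustration, not the libmachine "native" client the log mentions:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH executes a single command on the guest with key-based auth,
    // roughly what the "About to run SSH command" lines in the log correspond to.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.168.39.186:22", "docker",
    		"/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa",
    		`sudo hostname embed-certs-437183 && echo "embed-certs-437183" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }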
	I0817 22:24:32.559253  255215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:32.559282  255215 buildroot.go:174] setting up certificates
	I0817 22:24:32.559299  255215 provision.go:83] configureAuth start
	I0817 22:24:32.559313  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.559696  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.562469  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.562960  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.562989  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.563141  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.565760  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566120  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.566178  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566344  255215 provision.go:138] copyHostCerts
	I0817 22:24:32.566427  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:32.566443  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:32.566504  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:32.566633  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:32.566642  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:32.566676  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:32.566730  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:32.566738  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:32.566755  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:32.566803  255215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-437183 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube embed-certs-437183]
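configureAuth then regenerates the machine's server certificate with the IP and hostname SANs listed above before copying it to /etc/docker on the guest. A self-contained sketch of issuing a SAN-bearing server certificate with crypto/x509; it is self-signed here for brevity, whereas the log shows minikube signing with its ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-437183"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs mirroring the log: the machine IP, localhost, and both hostnames.
    		IPAddresses: []net.IP{net.ParseIP("192.168.39.186"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "embed-certs-437183"},
    	}
    	// Self-signed for the sketch; minikube would pass its CA cert as the parent
    	// and sign with the CA private key instead of key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }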
	I0817 22:24:31.437386  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.530148826s)
	I0817 22:24:31.437453  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0817 22:24:31.437478  255057 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:31.437578  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:32.398228  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0817 22:24:32.398294  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:32.398359  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:33.487487  255491 start.go:369] acquired machines lock for "default-k8s-diff-port-321287" in 4m16.661569765s
	I0817 22:24:33.487552  255491 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:33.487569  255491 fix.go:54] fixHost starting: 
	I0817 22:24:33.488059  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:33.488104  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:33.506430  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0817 22:24:33.506958  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:33.507587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:24:33.507618  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:33.508078  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:33.508296  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:33.508471  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:24:33.510492  255491 fix.go:102] recreateIfNeeded on default-k8s-diff-port-321287: state=Stopped err=<nil>
	I0817 22:24:33.510539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	W0817 22:24:33.510738  255491 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:33.512965  255491 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-321287" ...
	I0817 22:24:32.687763  255215 provision.go:172] copyRemoteCerts
	I0817 22:24:32.687835  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:32.687864  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.690614  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.690921  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.690963  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.691253  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.691469  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.691631  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.691745  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:32.788388  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:32.811861  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:32.835407  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0817 22:24:32.858542  255215 provision.go:86] duration metric: configureAuth took 299.225654ms
	I0817 22:24:32.858581  255215 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:32.858850  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:32.858989  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.861726  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862140  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.862186  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862436  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.862717  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.862961  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.863135  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.863321  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.863744  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.863762  255215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:33.202904  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:33.202942  255215 machine.go:91] provisioned docker machine in 952.11856ms
	I0817 22:24:33.202986  255215 start.go:300] post-start starting for "embed-certs-437183" (driver="kvm2")
	I0817 22:24:33.203002  255215 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:33.203039  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.203427  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:33.203465  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.206544  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.206969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.207004  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.207154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.207407  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.207591  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.207747  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.304648  255215 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:33.309404  255215 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:33.309435  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:33.309536  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:33.309635  255215 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:33.309752  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:33.318682  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:33.343830  255215 start.go:303] post-start completed in 140.8201ms
	I0817 22:24:33.343870  255215 fix.go:56] fixHost completed within 19.220571855s
	I0817 22:24:33.343901  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.347196  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347625  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.347658  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347927  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.348154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348336  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348487  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.348741  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:33.349346  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:33.349361  255215 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:33.487290  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311073.433845199
	
	I0817 22:24:33.487319  255215 fix.go:206] guest clock: 1692311073.433845199
	I0817 22:24:33.487331  255215 fix.go:219] Guest: 2023-08-17 22:24:33.433845199 +0000 UTC Remote: 2023-08-17 22:24:33.343875474 +0000 UTC m=+290.714391364 (delta=89.969725ms)
	I0817 22:24:33.487370  255215 fix.go:190] guest clock delta is within tolerance: 89.969725ms
	I0817 22:24:33.487378  255215 start.go:83] releasing machines lock for "embed-certs-437183", held for 19.364124776s
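fixHost finishes by reading the guest clock over SSH (the date command above), comparing it with the host clock, and proceeding only because the roughly 90ms delta is inside the allowed tolerance. A tiny sketch of that comparison using the two timestamps printed in the log; the 2s tolerance is an assumed value for illustration, not necessarily minikube's:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK takes the guest's epoch-seconds reading (the float printed by
    // date on the guest) and reports whether it is within tolerance of the host clock.
    func clockDeltaOK(guestEpoch float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the log: guest 1692311073.433845199, host 22:24:33.343875474 UTC.
    	delta, ok := clockDeltaOK(1692311073.433845199, time.Unix(0, 1692311073343875474), 2*time.Second)
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }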
	I0817 22:24:33.487412  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.487714  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:33.490444  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.490945  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.490975  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.491191  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492024  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492278  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492378  255215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:33.492440  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.492569  255215 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:33.492600  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.495461  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495742  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495836  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.495879  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.496130  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496147  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496287  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496341  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496445  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496604  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496605  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496792  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.496886  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.634234  255215 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:33.642529  255215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:33.802107  255215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:33.808439  255215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:33.808520  255215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:33.823947  255215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:33.823975  255215 start.go:466] detecting cgroup driver to use...
	I0817 22:24:33.824058  255215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:33.839665  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:33.854435  255215 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:33.854512  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:33.871530  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:33.886466  255215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:34.017312  255215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:34.152720  255215 docker.go:212] disabling docker service ...
	I0817 22:24:34.152811  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:34.170506  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:34.186072  255215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:34.327678  255215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:34.450774  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:34.468330  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:34.491610  255215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:34.491684  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.506266  255215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:34.506360  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.517471  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.531351  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.542363  255215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:34.553383  255215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:34.562937  255215 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:34.563029  255215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:34.575978  255215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:34.588500  255215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:34.715821  255215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:34.912771  255215 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:34.912853  255215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:34.918377  255215 start.go:534] Will wait 60s for crictl version
	I0817 22:24:34.918445  255215 ssh_runner.go:195] Run: which crictl
	I0817 22:24:34.922462  255215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:34.962654  255215 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:34.962754  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.020574  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.078516  255215 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
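Before that final restart of CRI-O, the embed-certs run rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pointed at registry.k8s.io/pause:3.9, cgroup_manager is forced to cgroupfs, and conmon_cgroup is reset to "pod". A minimal sketch of the same line-oriented rewrite done in Go instead of the sed invocations shown in the log; the rules are simplified (the separate delete-then-append of conmon_cgroup is folded into one substitution):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies the substitutions the log performs with sed:
    // fix the pause image and switch the cgroup manager to cgroupfs.
    func rewriteCrioConf(conf []byte) []byte {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`))
    	return conf
    }

    func main() {
    	in := []byte("pause_image = \"registry.k8s.io/pause:3.4.1\"\ncgroup_manager = \"systemd\"\n")
    	fmt.Print(string(rewriteCrioConf(in)))
    }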
	I0817 22:24:33.514448  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Start
	I0817 22:24:33.514667  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring networks are active...
	I0817 22:24:33.515504  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network default is active
	I0817 22:24:33.515973  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network mk-default-k8s-diff-port-321287 is active
	I0817 22:24:33.516607  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Getting domain xml...
	I0817 22:24:33.517407  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Creating domain...
	I0817 22:24:35.032992  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting to get IP...
	I0817 22:24:35.034213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034833  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.034747  256286 retry.go:31] will retry after 255.561446ms: waiting for machine to come up
	I0817 22:24:35.292497  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293071  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293110  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.293035  256286 retry.go:31] will retry after 265.433217ms: waiting for machine to come up
	I0817 22:24:35.560591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561221  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.561138  256286 retry.go:31] will retry after 429.726379ms: waiting for machine to come up
	I0817 22:24:35.993046  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993573  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.993482  256286 retry.go:31] will retry after 583.273043ms: waiting for machine to come up
	I0817 22:24:36.578452  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578943  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578983  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:36.578889  256286 retry.go:31] will retry after 504.577651ms: waiting for machine to come up
	I0817 22:24:35.080561  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:35.083955  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084338  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:35.084376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084624  255215 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:35.088994  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:35.104758  255215 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:35.104814  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:35.140529  255215 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:35.140606  255215 ssh_runner.go:195] Run: which lz4
	I0817 22:24:35.144869  255215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:24:35.149131  255215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:35.149168  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:24:37.067793  255215 crio.go:444] Took 1.922962 seconds to copy over tarball
	I0817 22:24:37.067867  255215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
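
Above, the preload path first asks crictl whether the Kubernetes images are already on disk; since they are not, the preloaded tarball is copied to the node and unpacked into /var. A small sketch of that fast-path check, assuming the tarball already sits at /preloaded.tar.lz4 on the node and the sketch runs as root:

    // preload_sketch.go - illustrative only, not minikube's actual code.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const required = "registry.k8s.io/kube-apiserver:v1.27.4"
    	const tarball = "/preloaded.tar.lz4"

    	out, err := exec.Command("crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatalf("crictl images: %v", err)
    	}
    	if strings.Contains(string(out), required) {
    		log.Println("images already preloaded, nothing to do")
    		return
    	}

    	// In minikube the tarball is scp'd from the host cache first; here it is
    	// assumed to already be present at /preloaded.tar.lz4 on the node.
    	if err := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
    		log.Fatalf("extracting %s: %v", tarball, err)
    	}
    	log.Println("preloaded images extracted into /var")
    }
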
	I0817 22:24:34.276465  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (1.878070898s)
	I0817 22:24:34.276495  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 from cache
	I0817 22:24:34.276528  255057 cache_images.go:123] Successfully loaded all cached images
	I0817 22:24:34.276535  255057 cache_images.go:92] LoadImages completed in 18.659123421s
	I0817 22:24:34.276651  255057 ssh_runner.go:195] Run: crio config
	I0817 22:24:34.349440  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:34.349470  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:34.349525  255057 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:34.349559  255057 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-525875 NodeName:no-preload-525875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:34.349737  255057 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-525875"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:34.349852  255057 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-525875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:34.349927  255057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:24:34.361082  255057 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:34.361211  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:34.370571  255057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0817 22:24:34.390596  255057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:24:34.409602  255057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
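
The kubeadm config and kubelet drop-in printed above are rendered from minikube's cluster config and then copied onto the node. A sketch of just the kubelet drop-in rendering, with the version, node name and IP hard-coded from this run (minikube fills them from its KubernetesConfig and transfers the file over SSH); writing under /etc/systemd assumes root:

    // kubelet_dropin_sketch.go - illustrative only, not minikube's actual code.
    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
    		log.Fatal(err)
    	}
    	f, err := os.Create("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
    	err = tmpl.Execute(f, map[string]string{
    		"Version": "v1.28.0-rc.1",
    		"Node":    "no-preload-525875",
    		"IP":      "192.168.61.196",
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
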
	I0817 22:24:34.431076  255057 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:34.435869  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:34.448753  255057 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875 for IP: 192.168.61.196
	I0817 22:24:34.448854  255057 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:34.449077  255057 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:34.449125  255057 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:34.449229  255057 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/client.key
	I0817 22:24:34.449287  255057 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key.0d67e2f2
	I0817 22:24:34.449320  255057 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key
	I0817 22:24:34.449438  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:34.449466  255057 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:34.449476  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:34.449499  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:34.449523  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:34.449545  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:34.449586  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:34.450600  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:34.481454  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:24:34.514638  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:34.539306  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:24:34.565390  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:34.595648  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:34.628105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:34.654925  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:34.684138  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:34.709433  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:34.736933  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:34.772217  255057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:34.790940  255057 ssh_runner.go:195] Run: openssl version
	I0817 22:24:34.800419  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:34.811545  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819623  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819697  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.825793  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:34.836531  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:34.847239  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852331  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852394  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.861659  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:34.871817  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:34.883257  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889654  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889728  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.897773  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:34.909259  255057 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:34.914775  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:34.921549  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:34.928370  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:34.934849  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:34.941470  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:34.949932  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
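
The certificate housekeeping above does two things: it installs each CA under its OpenSSL subject-hash name in /etc/ssl/certs (e.g. b5213941.0), and it verifies with -checkend 86400 that every cluster certificate remains valid for at least another day. A compact sketch of both checks, shelling out to the same openssl invocations and assuming root on the node:

    // cert_check_sketch.go - illustrative only, not minikube's actual code.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func subjectHash(pem string) string {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatalf("hashing %s: %v", pem, err)
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	// Link the CA into /etc/ssl/certs under its subject-hash name so OpenSSL finds it.
    	ca := "/usr/share/ca-certificates/minikubeCA.pem"
    	link := "/etc/ssl/certs/" + subjectHash(ca) + ".0" // e.g. b5213941.0 in the log
    	_ = os.Remove(link)
    	if err := os.Symlink(ca, link); err != nil {
    		log.Fatal(err)
    	}

    	// Fail if the certificate expires within the next 86400 seconds (24h).
    	cert := "/var/lib/minikube/certs/apiserver-etcd-client.crt"
    	if err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run(); err != nil {
    		log.Fatalf("%s expires within 24h (or could not be read): %v", cert, err)
    	}
    	fmt.Println("CA linked and certificate valid for at least 24h")
    }
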
	I0817 22:24:34.956863  255057 kubeadm.go:404] StartCluster: {Name:no-preload-525875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525
875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:34.957036  255057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:34.957123  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:35.005195  255057 cri.go:89] found id: ""
	I0817 22:24:35.005282  255057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:35.015727  255057 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:35.015754  255057 kubeadm.go:636] restartCluster start
	I0817 22:24:35.015821  255057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:35.025333  255057 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.026796  255057 kubeconfig.go:92] found "no-preload-525875" server: "https://192.168.61.196:8443"
	I0817 22:24:35.030361  255057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:35.040698  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.040754  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.055650  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.055675  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.055719  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.066812  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.567215  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.567291  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.580471  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.066958  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.067035  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.081758  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.567234  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.567320  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.582474  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.066970  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.067060  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.079066  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.567780  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.567887  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.583652  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
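
The long run of "Checking apiserver status ..." entries above (and repeated below for the other profiles) is a poll loop: pgrep looks for a running kube-apiserver, and because the process never starts in these runs, every probe exits with status 1 until the deadline passes and the cluster is marked for reconfiguration. A minimal sketch of that probe loop:

    // apiserver_poll_sketch.go - illustrative only, not minikube's actual code.
    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// Same probe the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(30 * time.Second)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			log.Println("kube-apiserver is up")
    			return
    		}
    		log.Println("stopped: unable to get apiserver pid, retrying")
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Println("gave up waiting for kube-apiserver; cluster needs reconfigure")
    }
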
	I0817 22:24:37.085672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086184  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086222  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.086130  256286 retry.go:31] will retry after 660.028004ms: waiting for machine to come up
	I0817 22:24:37.747563  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748056  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748086  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.748020  256286 retry.go:31] will retry after 798.952498ms: waiting for machine to come up
	I0817 22:24:38.548762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549243  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549276  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:38.549193  256286 retry.go:31] will retry after 1.15249289s: waiting for machine to come up
	I0817 22:24:39.703164  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703739  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703773  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:39.703675  256286 retry.go:31] will retry after 1.300284471s: waiting for machine to come up
	I0817 22:24:41.006289  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006781  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006814  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:41.006717  256286 retry.go:31] will retry after 1.500753962s: waiting for machine to come up
	I0817 22:24:40.155737  255215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087825588s)
	I0817 22:24:40.155771  255215 crio.go:451] Took 3.087946 seconds to extract the tarball
	I0817 22:24:40.155784  255215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:24:40.196940  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:40.238837  255215 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:24:40.238863  255215 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:24:40.238934  255215 ssh_runner.go:195] Run: crio config
	I0817 22:24:40.302526  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:24:40.302552  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:40.302572  255215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:40.302593  255215 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-437183 NodeName:embed-certs-437183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:40.302793  255215 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-437183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:40.302860  255215 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-437183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:40.302914  255215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:24:40.312428  255215 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:40.312517  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:40.321824  255215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0817 22:24:40.340069  255215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:24:40.358609  255215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0817 22:24:40.376546  255215 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:40.380576  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:40.394264  255215 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183 for IP: 192.168.39.186
	I0817 22:24:40.394310  255215 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:40.394509  255215 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:40.394569  255215 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:40.394678  255215 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/client.key
	I0817 22:24:40.394749  255215 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key.d0691019
	I0817 22:24:40.394810  255215 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key
	I0817 22:24:40.394956  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:40.394999  255215 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:40.395013  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:40.395056  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:40.395096  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:40.395127  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:40.395197  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:40.396122  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:40.421809  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:24:40.447412  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:40.472678  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:24:40.501303  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:40.528016  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:40.553741  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:40.581792  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:40.609270  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:40.634901  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:40.659698  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:40.685767  255215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:40.704114  255215 ssh_runner.go:195] Run: openssl version
	I0817 22:24:40.709921  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:40.720035  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725167  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725232  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.731054  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:40.741277  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:40.751649  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757538  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757621  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.763574  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:40.773786  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:40.784152  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790448  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790529  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.796689  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:40.806968  255215 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:40.811858  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:40.818172  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:40.824439  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:40.830588  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:40.836734  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:40.842857  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:24:40.849072  255215 kubeadm.go:404] StartCluster: {Name:embed-certs-437183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/ho
me/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:40.849208  255215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:40.849269  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:40.882040  255215 cri.go:89] found id: ""
	I0817 22:24:40.882132  255215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:40.893833  255215 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:40.893859  255215 kubeadm.go:636] restartCluster start
	I0817 22:24:40.893926  255215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:40.906498  255215 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.907768  255215 kubeconfig.go:92] found "embed-certs-437183" server: "https://192.168.39.186:8443"
	I0817 22:24:40.910282  255215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:40.921945  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.922021  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.933335  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.933360  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.933417  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.944168  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.444996  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.445109  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.457502  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.944752  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.944881  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.960929  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.444350  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.444464  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.461555  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.066927  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.067043  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.082831  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.567259  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.567347  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.581544  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.067112  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.067211  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.078859  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.566916  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.567075  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.582637  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.067188  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.067286  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.082771  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.567236  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.567331  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.583192  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.067806  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.067953  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.082962  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.567559  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.567664  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.582761  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.067267  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.067357  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.078631  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.567181  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.567299  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.583270  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.509044  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509662  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509688  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:42.509599  256286 retry.go:31] will retry after 2.726859315s: waiting for machine to come up
	I0817 22:24:45.239162  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239727  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239756  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:45.239667  256286 retry.go:31] will retry after 2.868820101s: waiting for machine to come up
	I0817 22:24:42.944983  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.945083  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.960949  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.444415  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.444541  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.460157  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.944659  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.944757  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.960506  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.444408  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.444544  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.460666  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.944252  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.944358  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.956137  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.444667  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.444779  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.460524  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.944710  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.945003  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.961038  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.444556  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.444684  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.459345  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.944760  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.944858  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.961217  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:47.444786  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.444935  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.460748  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.067683  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.067794  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.083038  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.567750  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.567850  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.579427  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.066928  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.067014  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.078671  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.567463  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.567559  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.579377  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.041151  255057 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:45.041202  255057 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:45.041218  255057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:45.041279  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:45.080480  255057 cri.go:89] found id: ""
	I0817 22:24:45.080569  255057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:45.096518  255057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:45.107778  255057 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:45.107880  255057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117115  255057 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117151  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.269517  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.790366  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.988106  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.124121  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.219342  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:46.219438  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.241849  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.795050  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.295314  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.795361  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.111566  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112173  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:48.112079  256286 retry.go:31] will retry after 3.129130141s: waiting for machine to come up
	I0817 22:24:51.245244  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245759  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245788  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:51.245707  256286 retry.go:31] will retry after 4.573749963s: waiting for machine to come up
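	The "will retry after …: waiting for machine to come up" lines above are libmachine polling the libvirt DHCP leases until the new domain reports an IP, with a growing delay between attempts. A minimal sketch of that style of bounded retry (illustrative only; the helper names and backoff constants are assumptions, not minikube's actual retry.go):

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP stands in for querying the libvirt DHCP leases for the domain's
	// current IP address; it is a placeholder, not minikube's real lookup.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address of domain " + domain)
	}

	// waitForIP retries lookupIP with a growing delay until it succeeds or the
	// deadline passes, mirroring the "will retry after ..." pattern in the log.
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		delay := 500 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between polls
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}

	func main() {
		if _, err := waitForIP("default-k8s-diff-port-321287", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	```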
	I0817 22:24:47.944303  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.944406  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.960613  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.445144  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.445245  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.460221  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.944726  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.944811  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.958575  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.444744  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.444875  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.460348  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.944986  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.945117  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.958396  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.445013  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:50.445110  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:50.459941  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.922423  255215 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:50.922493  255215 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:50.922513  255215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:50.922581  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:50.964064  255215 cri.go:89] found id: ""
	I0817 22:24:50.964154  255215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:50.980513  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:50.990086  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:50.990152  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999907  255215 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999935  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:51.147593  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.150655  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.002996323s)
	I0817 22:24:52.150694  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.367611  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.461186  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.534447  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:52.534547  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:52.551513  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
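	The reconfigure path above re-runs the kubeadm control-plane phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A rough local sketch of driving those phases in the same order (the phase list, binary path, and config path are copied from the log; minikube itself runs these remotely through its SSH runner, not like this):

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// Run the kubeadm init phases in the order the log shows:
	// certs -> kubeconfig -> kubelet-start -> control-plane -> etcd.
	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
				return
			}
		}
	}
	```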
	I0817 22:24:48.295087  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.794596  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.817042  255057 api_server.go:72] duration metric: took 2.597699698s to wait for apiserver process to appear ...
	I0817 22:24:48.817069  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:48.817086  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.817615  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:48.817653  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.818012  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:49.318894  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.160567  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.160612  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.160627  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.246065  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.246117  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.318300  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.394871  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.394932  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:52.818493  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.825349  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.825391  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.318277  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.324705  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:53.324751  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.818240  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.823823  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:24:53.834528  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:24:53.834573  255057 api_server.go:131] duration metric: took 5.01749639s to wait for apiserver health ...
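	The healthz wait above keeps issuing GET https://192.168.61.196:8443/healthz, tolerating connection refused, 403 (anonymous user), and 500 (poststarthooks still failing) until the endpoint answers 200 "ok". A minimal polling sketch, assuming an insecure TLS client and a fixed interval for brevity (the real check authenticates with the cluster's client certificate):

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers
	// 200 or the deadline expires. TLS verification is skipped only to keep
	// the sketch short.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		start := time.Now()
		for time.Since(start) < deadline {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.196:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	```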
	I0817 22:24:53.834586  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:53.834596  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:53.836827  255057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:53.838602  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:24:53.850880  255057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
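	The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log; the sketch below writes a hypothetical minimal bridge CNI conflist of the kind that step installs. The plugin list and pod subnet are assumptions, not the file minikube actually ships:

	```go
	package main

	import (
		"fmt"
		"os"
	)

	// A hypothetical minimal bridge CNI configuration; the subnet and plugin
	// list below are illustrative assumptions.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("writing conflist:", err)
		}
	}
	```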
	I0817 22:24:53.871556  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:24:53.886793  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:24:53.886858  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:24:53.886875  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:24:53.886889  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:24:53.886902  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:24:53.886922  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:24:53.886939  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:24:53.886948  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:24:53.886961  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:24:53.886975  255057 system_pods.go:74] duration metric: took 15.392207ms to wait for pod list to return data ...
	I0817 22:24:53.886988  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:24:53.891527  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:24:53.891589  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:24:53.891630  255057 node_conditions.go:105] duration metric: took 4.635197ms to run NodePressure ...
	I0817 22:24:53.891656  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:54.230065  255057 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239113  255057 kubeadm.go:787] kubelet initialised
	I0817 22:24:54.239146  255057 kubeadm.go:788] duration metric: took 9.048225ms waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239159  255057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:54.251454  255057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.266584  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266619  255057 pod_ready.go:81] duration metric: took 15.127554ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.266633  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266645  255057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.278901  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278932  255057 pod_ready.go:81] duration metric: took 12.266962ms waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.278944  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278952  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.297982  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298020  255057 pod_ready.go:81] duration metric: took 19.058778ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.298032  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298047  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.309929  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309967  255057 pod_ready.go:81] duration metric: took 11.898508ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.309980  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309991  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.676448  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676495  255057 pod_ready.go:81] duration metric: took 366.48994ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.676507  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676547  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.078351  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078392  255057 pod_ready.go:81] duration metric: took 401.831269ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.078405  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078416  255057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.476059  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476101  255057 pod_ready.go:81] duration metric: took 397.677369ms waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.476111  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476121  255057 pod_ready.go:38] duration metric: took 1.236947103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
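	The pod_ready waits above repeatedly inspect each system-critical pod's Ready condition and skip pods whose hosting node is not yet Ready. A minimal client-go sketch of the Ready-condition check only (the kubeconfig path and hard-coded pod name are assumptions for illustration; minikube's own helper also checks the node status):

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path and pod name are placeholders taken from the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-203458/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-b54g4", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
	```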
	I0817 22:24:55.476143  255057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:24:55.487413  255057 ops.go:34] apiserver oom_adj: -16
	I0817 22:24:55.487448  255057 kubeadm.go:640] restartCluster took 20.471686915s
	I0817 22:24:55.487459  255057 kubeadm.go:406] StartCluster complete in 20.530629906s
	I0817 22:24:55.487482  255057 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.487591  255057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:24:55.489799  255057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.490091  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:24:55.490202  255057 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:24:55.490349  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:55.490375  255057 addons.go:69] Setting storage-provisioner=true in profile "no-preload-525875"
	I0817 22:24:55.490380  255057 addons.go:69] Setting metrics-server=true in profile "no-preload-525875"
	I0817 22:24:55.490397  255057 addons.go:231] Setting addon storage-provisioner=true in "no-preload-525875"
	I0817 22:24:55.490404  255057 addons.go:231] Setting addon metrics-server=true in "no-preload-525875"
	W0817 22:24:55.490409  255057 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:24:55.490435  255057 addons.go:69] Setting default-storageclass=true in profile "no-preload-525875"
	I0817 22:24:55.490465  255057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-525875"
	I0817 22:24:55.490474  255057 host.go:66] Checking if "no-preload-525875" exists ...
	W0817 22:24:55.490413  255057 addons.go:240] addon metrics-server should already be in state true
	I0817 22:24:55.490547  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.491607  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.491742  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492181  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492232  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492255  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492291  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.503335  255057 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-525875" context rescaled to 1 replicas
	I0817 22:24:55.503399  255057 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:24:55.505836  255057 out.go:177] * Verifying Kubernetes components...
	I0817 22:24:55.507438  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:24:55.512841  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0817 22:24:55.513126  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0817 22:24:55.513241  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0817 22:24:55.513441  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513567  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513770  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.514042  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514082  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514128  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514159  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514577  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514595  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514708  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514733  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514804  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.515081  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.515186  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515223  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.515651  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515699  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.532135  255057 addons.go:231] Setting addon default-storageclass=true in "no-preload-525875"
	W0817 22:24:55.532171  255057 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:24:55.532205  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.532614  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.532665  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.535464  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0817 22:24:55.537205  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:24:55.537544  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.537676  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.538005  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538022  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538197  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538209  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538328  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538574  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538694  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.538757  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.540907  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.541221  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.543481  255057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:55.545233  255057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:24:55.820955  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.821534  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Found IP for machine: 192.168.50.30
	I0817 22:24:55.821557  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserving static IP address...
	I0817 22:24:55.821590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has current primary IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.822134  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.822169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | skip adding static IP to network mk-default-k8s-diff-port-321287 - found existing host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"}
	I0817 22:24:55.822189  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Getting to WaitForSSH function...
	I0817 22:24:55.822212  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserved static IP address: 192.168.50.30
	I0817 22:24:55.822225  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for SSH to be available...
	I0817 22:24:55.825198  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.825630  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825769  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH client type: external
	I0817 22:24:55.825802  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa (-rw-------)
	I0817 22:24:55.825837  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:55.825855  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | About to run SSH command:
	I0817 22:24:55.825874  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | exit 0
	I0817 22:24:55.923224  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:55.923669  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetConfigRaw
	I0817 22:24:55.924434  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:55.927453  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.927935  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.927987  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.928304  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:24:55.928581  255491 machine.go:88] provisioning docker machine ...
	I0817 22:24:55.928610  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:55.928818  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.928963  255491 buildroot.go:166] provisioning hostname "default-k8s-diff-port-321287"
	I0817 22:24:55.928984  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.929169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:55.931672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932179  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.932213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:55.932606  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.932862  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.933008  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:55.933228  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:55.933895  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:55.933917  255491 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-321287 && echo "default-k8s-diff-port-321287" | sudo tee /etc/hostname
	I0817 22:24:56.066560  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-321287
	
	I0817 22:24:56.066599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.070072  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070509  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.070590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070901  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.071175  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071377  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071589  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.071813  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.072479  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.072511  255491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-321287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-321287/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-321287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:56.210857  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:56.210897  255491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:56.210954  255491 buildroot.go:174] setting up certificates
	I0817 22:24:56.210968  255491 provision.go:83] configureAuth start
	I0817 22:24:56.210981  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:56.211435  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:56.214305  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214711  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.214762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214931  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.217766  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218200  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.218245  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218444  255491 provision.go:138] copyHostCerts
	I0817 22:24:56.218519  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:56.218533  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:56.218609  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:56.218728  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:56.218738  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:56.218769  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:56.218846  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:56.218856  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:56.218886  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:56.218953  255491 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-321287 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube default-k8s-diff-port-321287]
	I0817 22:24:56.289985  255491 provision.go:172] copyRemoteCerts
	I0817 22:24:56.290068  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:56.290104  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.293536  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.293996  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.294027  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.294218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.294456  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.294675  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.294866  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.386746  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:56.413448  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 22:24:56.438758  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:24:56.467489  255491 provision.go:86] duration metric: configureAuth took 256.504259ms
	I0817 22:24:56.467525  255491 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:56.467792  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:56.467917  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.470870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.471373  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471601  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.471839  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472048  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.472441  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.473139  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.473162  255491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:57.100503  254975 start.go:369] acquired machines lock for "old-k8s-version-294781" in 57.735745135s
	I0817 22:24:57.100571  254975 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:57.100583  254975 fix.go:54] fixHost starting: 
	I0817 22:24:57.101120  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:57.101172  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:57.121393  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0817 22:24:57.122017  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:57.122807  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:24:57.122834  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:57.123289  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:57.123463  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:24:57.123584  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:24:57.125545  254975 fix.go:102] recreateIfNeeded on old-k8s-version-294781: state=Stopped err=<nil>
	I0817 22:24:57.125580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	W0817 22:24:57.125759  254975 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:57.127853  254975 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-294781" ...
	I0817 22:24:55.546816  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:24:55.546839  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:24:55.546870  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.545324  255057 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.546955  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:24:55.546971  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.551364  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552354  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552580  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0817 22:24:55.552920  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.552950  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553052  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.553160  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553171  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.553238  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553408  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553592  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553747  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553751  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553805  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.553823  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.553914  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553952  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554237  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.554648  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554839  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.554878  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.594781  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0817 22:24:55.595253  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.595928  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.595955  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.596358  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.596659  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.598866  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.599111  255057 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.599123  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:24:55.599141  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.602520  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.602895  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.602924  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.603114  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.603334  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.603537  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.603678  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.693508  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:24:55.693535  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:24:55.720303  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.739691  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:24:55.739725  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:24:55.752809  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.793480  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:55.793512  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:24:55.805075  255057 node_ready.go:35] waiting up to 6m0s for node "no-preload-525875" to be "Ready" ...
	I0817 22:24:55.805164  255057 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:24:55.834328  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:57.451781  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.731427598s)
	I0817 22:24:57.451824  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.698971636s)
	I0817 22:24:57.451845  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451859  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.451876  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451887  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452756  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.452808  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.452818  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.452832  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.452842  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452965  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453000  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453009  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453019  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453027  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453173  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453247  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453270  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453295  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453306  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453677  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453709  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453720  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.455299  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.455300  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.455325  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.564475  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.730071346s)
	I0817 22:24:57.564539  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.564551  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565087  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565160  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565170  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565185  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.565217  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565483  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565530  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565539  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565550  255057 addons.go:467] Verifying addon metrics-server=true in "no-preload-525875"
	I0817 22:24:57.569420  255057 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:24:53.063998  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:53.564081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.064081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.564321  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.064476  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.090168  255215 api_server.go:72] duration metric: took 2.555721263s to wait for apiserver process to appear ...
	I0817 22:24:55.090200  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:55.090223  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:57.571712  255057 addons.go:502] enable addons completed in 2.081503451s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:24:57.882753  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:56.835353  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:56.835388  255491 machine.go:91] provisioned docker machine in 906.787255ms
	I0817 22:24:56.835401  255491 start.go:300] post-start starting for "default-k8s-diff-port-321287" (driver="kvm2")
	I0817 22:24:56.835415  255491 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:56.835460  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:56.835881  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:56.835925  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.838868  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839240  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.839274  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839366  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.839581  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.839808  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.839994  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.932979  255491 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:56.937642  255491 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:56.937675  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:56.937770  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:56.937877  255491 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:56.938003  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:56.949478  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:56.975557  255491 start.go:303] post-start completed in 140.136722ms
	I0817 22:24:56.975589  255491 fix.go:56] fixHost completed within 23.488019817s
	I0817 22:24:56.975618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.979039  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979486  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.979549  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979673  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.979951  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980152  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980301  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.980507  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.981194  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.981211  255491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:57.100308  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311097.042275817
	
	I0817 22:24:57.100341  255491 fix.go:206] guest clock: 1692311097.042275817
	I0817 22:24:57.100351  255491 fix.go:219] Guest: 2023-08-17 22:24:57.042275817 +0000 UTC Remote: 2023-08-17 22:24:56.975593678 +0000 UTC m=+280.298176937 (delta=66.682139ms)
	I0817 22:24:57.100389  255491 fix.go:190] guest clock delta is within tolerance: 66.682139ms
	I0817 22:24:57.100396  255491 start.go:83] releasing machines lock for "default-k8s-diff-port-321287", held for 23.61286841s
	I0817 22:24:57.100436  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.100813  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:57.104312  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.104719  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.104807  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.105050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105744  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105949  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.106081  255491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:57.106133  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.106268  255491 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:57.106395  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.110145  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110531  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.110577  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.111166  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.111352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.111402  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.111567  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.112700  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.112751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.112980  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.113206  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.113379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.113534  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.200530  255491 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:57.232758  255491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:57.405574  255491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:57.413543  255491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:57.413637  255491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:57.438687  255491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:57.438718  255491 start.go:466] detecting cgroup driver to use...
	I0817 22:24:57.438808  255491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:57.458572  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:57.475320  255491 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:57.475397  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:57.493585  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:57.512274  255491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:57.650975  255491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:57.788299  255491 docker.go:212] disabling docker service ...
	I0817 22:24:57.788395  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:57.806350  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:57.819894  255491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:57.966925  255491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:58.088274  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:58.107210  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:58.129691  255491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:58.129766  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.141217  255491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:58.141388  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.153376  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.166177  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.177326  255491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
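Taken together, the sed edits above amount to the following settings in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (a sketch of the end state, assuming those keys already exist in the stock drop-in), plus removal of any previously generated CNI config under /etc/cni/net.mk:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"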
	I0817 22:24:58.191627  255491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:58.203913  255491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:58.204001  255491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:58.222901  255491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
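The sysctl failure above is expected while br_netfilter is not yet loaded; minikube then loads the module and enables IPv4 forwarding, the two kernel prerequisites for the bridge CNI it configures later. A manual equivalent on the guest (the same steps as logged, shown here only for clarity) would be:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # probe should now succeed
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"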
	I0817 22:24:58.233280  255491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:58.366794  255491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:58.603364  255491 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:58.603462  255491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:58.616285  255491 start.go:534] Will wait 60s for crictl version
	I0817 22:24:58.616397  255491 ssh_runner.go:195] Run: which crictl
	I0817 22:24:58.622933  255491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:58.668866  255491 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:58.668961  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.735680  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.800442  255491 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:59.550327  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.550367  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:59.550385  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:59.646890  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.646928  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:00.147486  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.160700  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.160745  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:00.647077  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.685626  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.685678  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.147134  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.156042  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:01.156083  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.647569  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.657291  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:25:01.686204  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:01.686260  255215 api_server.go:131] duration metric: took 6.59605111s to wait for apiserver health ...
	I0817 22:25:01.686274  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:25:01.686283  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:01.688856  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:58.802321  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:58.806172  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.806661  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:58.806696  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.807029  255491 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:58.813045  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:58.830937  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:58.831008  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:58.880355  255491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:58.880469  255491 ssh_runner.go:195] Run: which lz4
	I0817 22:24:58.886729  255491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:24:58.893418  255491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:58.893496  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:25:01.093233  255491 crio.go:444] Took 2.206544 seconds to copy over tarball
	I0817 22:25:01.093422  255491 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:24:57.129390  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Start
	I0817 22:24:57.134160  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring networks are active...
	I0817 22:24:57.134190  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network default is active
	I0817 22:24:57.134205  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network mk-old-k8s-version-294781 is active
	I0817 22:24:57.134214  254975 main.go:141] libmachine: (old-k8s-version-294781) Getting domain xml...
	I0817 22:24:57.134228  254975 main.go:141] libmachine: (old-k8s-version-294781) Creating domain...
	I0817 22:24:58.694125  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting to get IP...
	I0817 22:24:58.695714  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:58.696209  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:58.696356  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:58.696219  256493 retry.go:31] will retry after 307.640559ms: waiting for machine to come up
	I0817 22:24:59.006214  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.008497  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.008536  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.006931  256493 retry.go:31] will retry after 316.904618ms: waiting for machine to come up
	I0817 22:24:59.325929  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.326634  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.326672  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.326593  256493 retry.go:31] will retry after 466.068046ms: waiting for machine to come up
	I0817 22:24:59.794718  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.795268  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.795294  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.795200  256493 retry.go:31] will retry after 399.064857ms: waiting for machine to come up
	I0817 22:25:00.196015  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.196733  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.196760  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.196632  256493 retry.go:31] will retry after 553.183294ms: waiting for machine to come up
	I0817 22:25:00.751687  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.752341  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.752366  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.752283  256493 retry.go:31] will retry after 815.149471ms: waiting for machine to come up
	I0817 22:25:01.568847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:01.569679  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:01.569709  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:01.569547  256493 retry.go:31] will retry after 827.38414ms: waiting for machine to come up
	I0817 22:25:01.690788  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:01.726335  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:01.804837  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:01.844074  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:01.844121  255215 system_pods.go:61] "coredns-5d78c9869d-twvdv" [f8305fa5-f0e7-4090-af8f-a9eefe00be65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:01.844134  255215 system_pods.go:61] "etcd-embed-certs-437183" [409212ae-25eb-4221-b380-d73562531eb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:01.844143  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [a378c1e7-c439-427f-b56e-7aeb2397dda2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:01.844149  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [7d8c33ff-f8bd-4ca8-a1cd-7e03a3c1ea55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:01.844156  255215 system_pods.go:61] "kube-proxy-tqlkl" [3dc68d59-da16-4a8e-8664-24c280769e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:01.844162  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [54addcee-6a78-4a9d-9b15-a02e79ac92be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:01.844169  255215 system_pods.go:61] "metrics-server-74d5c6b9c-h5tt6" [6f8a838b-81d8-444d-aba1-fe46fefe8815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:01.844175  255215 system_pods.go:61] "storage-provisioner" [65cd2cbe-dcb1-4842-af27-551c8d0a93d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:01.844182  255215 system_pods.go:74] duration metric: took 39.323312ms to wait for pod list to return data ...
	I0817 22:25:01.844194  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:01.857431  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:01.857471  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:01.857485  255215 node_conditions.go:105] duration metric: took 13.285661ms to run NodePressure ...
	I0817 22:25:01.857511  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:02.318085  255215 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329089  255215 kubeadm.go:787] kubelet initialised
	I0817 22:25:02.329122  255215 kubeadm.go:788] duration metric: took 10.998414ms waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329133  255215 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.338233  255215 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:59.891549  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.386499  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.889146  255057 node_ready.go:49] node "no-preload-525875" has status "Ready":"True"
	I0817 22:25:02.889193  255057 node_ready.go:38] duration metric: took 7.084075756s waiting for node "no-preload-525875" to be "Ready" ...
	I0817 22:25:02.889209  255057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.915138  255057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926622  255057 pod_ready.go:92] pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:02.926662  255057 pod_ready.go:81] duration metric: took 11.479543ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926677  255057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.597215  255491 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.503742232s)
	I0817 22:25:04.597254  255491 crio.go:451] Took 3.503924 seconds to extract the tarball
	I0817 22:25:04.597269  255491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:04.640799  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:04.683452  255491 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:25:04.683478  255491 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:25:04.683564  255491 ssh_runner.go:195] Run: crio config
	I0817 22:25:04.755546  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:04.755579  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:04.755618  255491 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:04.755646  255491 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8444 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-321287 NodeName:default-k8s-diff-port-321287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:25:04.755865  255491 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-321287"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:04.755964  255491 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-321287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0817 22:25:04.756040  255491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:25:04.768800  255491 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:04.768884  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:04.779179  255491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0817 22:25:04.798848  255491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:04.818088  255491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0817 22:25:04.839021  255491 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:04.843996  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:04.858954  255491 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287 for IP: 192.168.50.30
	I0817 22:25:04.858992  255491 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:04.859193  255491 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:04.859263  255491 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:04.859371  255491 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/client.key
	I0817 22:25:04.859452  255491 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key.2a920f45
	I0817 22:25:04.859519  255491 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key
	I0817 22:25:04.859673  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:04.859717  255491 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:04.859733  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:04.859766  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:04.859800  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:04.859839  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:04.859901  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:04.860739  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:04.893191  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:25:04.923817  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:04.953192  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:25:04.985353  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:05.015743  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:05.043565  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:05.072283  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:05.102360  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:05.131090  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:05.158164  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:05.183921  255491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:05.201231  255491 ssh_runner.go:195] Run: openssl version
	I0817 22:25:05.207477  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:05.218696  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224473  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224551  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.230753  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:05.244810  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:05.255480  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.260972  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.261054  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.267724  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:05.280466  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:05.291975  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298403  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298519  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.306541  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:05.318878  255491 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:05.324755  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:05.333167  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:05.341869  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:05.350173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:05.357173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:05.364289  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:25:05.372301  255491 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:05.372435  255491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:05.372493  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:05.409127  255491 cri.go:89] found id: ""
	I0817 22:25:05.409211  255491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:05.420288  255491 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:05.420316  255491 kubeadm.go:636] restartCluster start
	I0817 22:25:05.420401  255491 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:05.431336  255491 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.433035  255491 kubeconfig.go:92] found "default-k8s-diff-port-321287" server: "https://192.168.50.30:8444"
	I0817 22:25:05.437153  255491 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:05.446894  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.446956  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.459319  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.459353  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.459412  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.472543  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.973294  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.973386  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.986474  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.473007  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.473141  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.485870  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:02.398531  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:02.399142  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:02.399174  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:02.399045  256493 retry.go:31] will retry after 1.143040413s: waiting for machine to come up
	I0817 22:25:03.543421  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:03.544040  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:03.544076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:03.543971  256493 retry.go:31] will retry after 1.654291601s: waiting for machine to come up
	I0817 22:25:05.200880  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:05.201405  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:05.201435  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:05.201350  256493 retry.go:31] will retry after 1.752048888s: waiting for machine to come up
	I0817 22:25:04.379203  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.872822  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:04.499009  255057 pod_ready.go:92] pod "etcd-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.499040  255057 pod_ready.go:81] duration metric: took 1.572354603s waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.499057  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761691  255057 pod_ready.go:92] pod "kube-apiserver-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.761719  255057 pod_ready.go:81] duration metric: took 262.653075ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761734  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769937  255057 pod_ready.go:92] pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.769968  255057 pod_ready.go:81] duration metric: took 8.225874ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769983  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881406  255057 pod_ready.go:92] pod "kube-proxy-pzpk2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.881444  255057 pod_ready.go:81] duration metric: took 111.452654ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881461  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643623  255057 pod_ready.go:92] pod "kube-scheduler-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:05.643648  255057 pod_ready.go:81] duration metric: took 762.178998ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643658  255057 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:07.695130  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.972803  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.972898  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.985259  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.473416  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.473551  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.485378  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.973567  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.973708  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.989454  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.472762  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.472894  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.489910  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.972732  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.972822  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.984958  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.473569  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.473709  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.490412  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.972908  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.972987  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.986072  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.473333  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.473429  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.485656  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.973314  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.973423  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.989391  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:11.472953  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.473077  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.485192  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.956350  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:06.956874  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:06.956904  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:06.956830  256493 retry.go:31] will retry after 2.09338178s: waiting for machine to come up
	I0817 22:25:09.052006  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:09.052516  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:09.052549  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:09.052447  256493 retry.go:31] will retry after 3.023234706s: waiting for machine to come up
	I0817 22:25:08.877674  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:09.370723  255215 pod_ready.go:92] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:09.370754  255215 pod_ready.go:81] duration metric: took 7.032445075s waiting for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:09.370767  255215 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893038  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:10.893076  255215 pod_ready.go:81] duration metric: took 1.522300039s waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893091  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918300  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:11.918330  255215 pod_ready.go:81] duration metric: took 1.025229003s waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918347  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.192198  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:12.692398  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:11.973001  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.973083  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.984794  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.473426  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.473527  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.489566  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.972736  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.972840  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.984972  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.473572  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.473665  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.485760  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.972804  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.972952  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.984788  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.473423  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.473501  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.484892  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.973394  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.973481  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.985492  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:15.447933  255491 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:15.447967  255491 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:15.447983  255491 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:15.448044  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:15.483471  255491 cri.go:89] found id: ""
	I0817 22:25:15.483596  255491 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:15.500292  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:15.510630  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:15.510695  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520738  255491 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520771  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:15.635683  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:12.079485  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:12.080041  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:12.080069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:12.079986  256493 retry.go:31] will retry after 4.097355523s: waiting for machine to come up
	I0817 22:25:16.178550  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:16.179032  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:16.179063  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:16.178988  256493 retry.go:31] will retry after 4.178327275s: waiting for machine to come up
	I0817 22:25:14.176089  255215 pod_ready.go:102] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:14.679850  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.679881  255215 pod_ready.go:81] duration metric: took 2.761525031s waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.679894  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685308  255215 pod_ready.go:92] pod "kube-proxy-tqlkl" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.685339  255215 pod_ready.go:81] duration metric: took 5.435708ms waiting for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685352  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967073  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.967099  255215 pod_ready.go:81] duration metric: took 281.740411ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967110  255215 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:17.277033  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:15.190295  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:17.193522  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:16.723896  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0881723s)
	I0817 22:25:16.723933  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:16.940953  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.025208  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.110784  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:17.110880  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.123610  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.645363  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.145697  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.645211  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.145515  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.645764  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.665892  255491 api_server.go:72] duration metric: took 2.555110324s to wait for apiserver process to appear ...
	I0817 22:25:19.665920  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:19.665938  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:20.359726  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360375  254975 main.go:141] libmachine: (old-k8s-version-294781) Found IP for machine: 192.168.72.56
	I0817 22:25:20.360408  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserving static IP address...
	I0817 22:25:20.360426  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has current primary IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360798  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserved static IP address: 192.168.72.56
	I0817 22:25:20.360843  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.360866  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting for SSH to be available...
	I0817 22:25:20.360898  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | skip adding static IP to network mk-old-k8s-version-294781 - found existing host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"}
	I0817 22:25:20.360918  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Getting to WaitForSSH function...
	I0817 22:25:20.363319  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.363721  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.363767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.364016  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH client type: external
	I0817 22:25:20.364069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa (-rw-------)
	I0817 22:25:20.364115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:25:20.364135  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | About to run SSH command:
	I0817 22:25:20.364175  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | exit 0
	I0817 22:25:20.454327  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | SSH cmd err, output: <nil>: 
	I0817 22:25:20.454772  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetConfigRaw
	I0817 22:25:20.455585  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.458846  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.459420  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459910  254975 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/config.json ...
	I0817 22:25:20.460207  254975 machine.go:88] provisioning docker machine ...
	I0817 22:25:20.460240  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:20.460489  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460712  254975 buildroot.go:166] provisioning hostname "old-k8s-version-294781"
	I0817 22:25:20.460743  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460912  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.463811  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464166  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.464216  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464391  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.464610  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464779  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464936  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.465157  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.465566  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.465578  254975 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-294781 && echo "old-k8s-version-294781" | sudo tee /etc/hostname
	I0817 22:25:20.604184  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-294781
	
	I0817 22:25:20.604223  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.607313  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.607668  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.607706  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.608091  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.608335  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608511  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608656  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.608845  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.609344  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.609368  254975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-294781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-294781/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-294781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:25:20.731574  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:25:20.731639  254975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:25:20.731679  254975 buildroot.go:174] setting up certificates
	I0817 22:25:20.731697  254975 provision.go:83] configureAuth start
	I0817 22:25:20.731717  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.732057  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.735344  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.735748  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.735780  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.736038  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.738896  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739346  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.739384  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739562  254975 provision.go:138] copyHostCerts
	I0817 22:25:20.739634  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:25:20.739650  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:25:20.739733  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:25:20.739875  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:25:20.739889  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:25:20.739921  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:25:20.740027  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:25:20.740040  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:25:20.740069  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:25:20.740159  254975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-294781 san=[192.168.72.56 192.168.72.56 localhost 127.0.0.1 minikube old-k8s-version-294781]
	I0817 22:25:20.937408  254975 provision.go:172] copyRemoteCerts
	I0817 22:25:20.937480  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:25:20.937508  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.940609  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941074  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.941115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941294  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.941469  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.941678  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.941899  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.033976  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:25:21.062438  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:25:21.090325  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:25:21.116263  254975 provision.go:86] duration metric: configureAuth took 384.54455ms
	I0817 22:25:21.116295  254975 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:25:21.116550  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:25:21.116667  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.119767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120295  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.120351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.120735  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.120898  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.121114  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.121330  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.121982  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.122011  254975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:25:21.449644  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:25:21.449675  254975 machine.go:91] provisioned docker machine in 989.449203ms
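
The `%!s(MISSING)` in the logged command is Go's fmt notation for a format verb that had no matching argument when the string was rendered for the log; it suggests the guest itself received a literal `printf %s ...`. A tiny Go example reproducing the artifact:

package main

import "fmt"

func main() {
	// fmt substitutes "%!s(MISSING)" when a format string has more verbs than
	// arguments; the guest most likely received a literal "printf %s ..." and
	// only the log rendering shows the placeholder.
	cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s")
	fmt.Println(cmd) // prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
}
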
	I0817 22:25:21.449686  254975 start.go:300] post-start starting for "old-k8s-version-294781" (driver="kvm2")
	I0817 22:25:21.449696  254975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:25:21.449713  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.450065  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:25:21.450112  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.453436  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.453847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.453893  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.454092  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.454320  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.454501  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.454682  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.544501  254975 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:25:21.549102  254975 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:25:21.549128  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:25:21.549201  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:25:21.549301  254975 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:25:21.549425  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:25:21.559169  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:21.585459  254975 start.go:303] post-start completed in 135.754284ms
	I0817 22:25:21.585496  254975 fix.go:56] fixHost completed within 24.48491231s
	I0817 22:25:21.585531  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.588650  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589045  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.589076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589236  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.589445  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589638  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589810  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.590026  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.590596  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.590621  254975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:25:21.704138  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311121.622295369
	
	I0817 22:25:21.704162  254975 fix.go:206] guest clock: 1692311121.622295369
	I0817 22:25:21.704170  254975 fix.go:219] Guest: 2023-08-17 22:25:21.622295369 +0000 UTC Remote: 2023-08-17 22:25:21.585502401 +0000 UTC m=+364.810906249 (delta=36.792968ms)
	I0817 22:25:21.704193  254975 fix.go:190] guest clock delta is within tolerance: 36.792968ms
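
The clock check runs `date +%s.%N` on the guest (logged with the same missing-argument artifact) and compares the result with the host's idea of now. A small Go sketch of that comparison, reproducing the 36.792968ms delta from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest (seconds plus a
// 9-digit nanosecond fraction) and returns how far it drifts from the host
// clock, mirroring the fix.go tolerance check in the log above.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log: the guest reported 1692311121.622295369 and
	// the host side of the comparison was 22:25:21.585502401 UTC.
	d, _ := clockDelta("1692311121.622295369", time.Unix(1692311121, 585502401))
	fmt.Println("guest clock delta:", d) // 36.792968ms, well inside tolerance
}
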
	I0817 22:25:21.704200  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 24.603659499s
	I0817 22:25:21.704228  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.704524  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:21.707198  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707512  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.707555  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707715  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708285  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708516  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708605  254975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:25:21.708670  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.708790  254975 ssh_runner.go:195] Run: cat /version.json
	I0817 22:25:21.708816  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.711462  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711744  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711858  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.711906  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712090  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712154  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.712219  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712326  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712347  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712539  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712541  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712749  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712766  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.712936  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:19.775731  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.777036  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:19.693695  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:22.189616  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.818518  254975 ssh_runner.go:195] Run: systemctl --version
	I0817 22:25:21.824498  254975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:25:21.971461  254975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:25:21.978188  254975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:25:21.978271  254975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:25:21.993704  254975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:25:21.993738  254975 start.go:466] detecting cgroup driver to use...
	I0817 22:25:21.993820  254975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:25:22.009074  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:25:22.022874  254975 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:25:22.022935  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:25:22.036508  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:25:22.050919  254975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:25:22.174894  254975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:25:22.307776  254975 docker.go:212] disabling docker service ...
	I0817 22:25:22.307863  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:25:22.322017  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:25:22.334550  254975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:25:22.439721  254975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:25:22.554591  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:25:22.570460  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:25:22.588685  254975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:25:22.588767  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.599716  254975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:25:22.599801  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.611990  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.623873  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.636093  254975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:25:22.647438  254975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:25:22.657266  254975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:25:22.657338  254975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:25:22.672463  254975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:25:22.683508  254975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:25:22.799912  254975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:25:22.995704  254975 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:25:22.995816  254975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:25:23.003199  254975 start.go:534] Will wait 60s for crictl version
	I0817 22:25:23.003280  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:23.008350  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:25:23.042651  254975 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:25:23.042763  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.093624  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.142140  254975 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
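
The runtime preparation above boils down to a handful of idempotent shell edits followed by a CRI-O restart. A sketch, in Go for consistency with the rest of minikube's code, that only assembles those commands (actually executing them would go through the same SSH runner, which is omitted here):

package main

import "fmt"

// crioSetupCommands lists the shell edits from the log: point crictl at the
// CRI-O socket, pin the pause image, force the cgroupfs cgroup manager and a
// "pod" conmon cgroup, drop any stale minikube CNI config, then restart CRI-O.
func crioSetupCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		`sudo mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		`sudo rm -rf /etc/cni/net.mk`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
}

func main() {
	for _, c := range crioSetupCommands("registry.k8s.io/pause:3.1", "cgroupfs") {
		fmt.Println(c)
	}
}
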
	I0817 22:25:24.666188  255491 api_server.go:269] stopped: https://192.168.50.30:8444/healthz: Get "https://192.168.50.30:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:24.666264  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:24.903729  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:24.903775  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:25.404125  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.420215  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.420261  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:25.903943  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.914463  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.914514  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:26.403966  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:26.414021  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:25:26.437708  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:26.437750  255491 api_server.go:131] duration metric: took 6.771821605s to wait for apiserver health ...
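
The 403 for "system:anonymous", the 500s with rbac/bootstrap-roles still failing, and the final 200 are the usual progression of an apiserver finishing its post-start hooks. A minimal Go sketch of such a polling loop, assuming an anonymous HTTPS probe with certificate verification disabled:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, mirroring the 403 -> 500 -> 200 progression above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against the apiserver's self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.30:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
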
	I0817 22:25:26.437779  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:26.437789  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:26.440095  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:26.441921  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:26.469640  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:26.514785  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:26.532553  255491 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:26.532616  255491 system_pods.go:61] "coredns-5d78c9869d-v74x9" [1c42e9be-16fa-47c2-ab04-9ec805320760] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:26.532631  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [a3655572-9d89-4ef6-85db-85dc454d1021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:26.532659  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [6786ac16-78df-4909-8542-0952af5beff6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:26.532675  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [ac8085d0-db9c-4229-b816-4753b7cfcae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:26.532686  255491 system_pods.go:61] "kube-proxy-4d9dx" [22447888-6570-47b7-baac-a5842688de9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:26.532697  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [bfcfc726-e659-4cb9-ad36-9887ddfaf170] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:26.532713  255491 system_pods.go:61] "metrics-server-74d5c6b9c-25l6w" [205dcf88-9d10-416b-8fd0-c93939208c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:26.532722  255491 system_pods.go:61] "storage-provisioner" [be486251-ebb9-4d0b-85c9-fe04e76634e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:26.532738  255491 system_pods.go:74] duration metric: took 17.92531ms to wait for pod list to return data ...
	I0817 22:25:26.532751  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:26.541133  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:26.541180  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:26.541197  255491 node_conditions.go:105] duration metric: took 8.431415ms to run NodePressure ...
	I0817 22:25:26.541228  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:23.143729  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:23.146678  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147145  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:23.147178  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147433  254975 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:25:23.151860  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:23.165714  254975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 22:25:23.165805  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:23.207234  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:23.207334  254975 ssh_runner.go:195] Run: which lz4
	I0817 22:25:23.211497  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:25:23.216272  254975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:25:23.216309  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0817 22:25:25.170164  254975 crio.go:444] Took 1.958697 seconds to copy over tarball
	I0817 22:25:25.170253  254975 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:25:23.792764  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.276276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:24.193719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.692837  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.873863  255491 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:26.878982  255491 kubeadm.go:787] kubelet initialised
	I0817 22:25:26.879005  255491 kubeadm.go:788] duration metric: took 5.10797ms waiting for restarted kubelet to initialise ...
	I0817 22:25:26.879014  255491 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:26.885772  255491 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:29.448692  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:28.464409  254975 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.294096057s)
	I0817 22:25:28.464448  254975 crio.go:451] Took 3.294247 seconds to extract the tarball
	I0817 22:25:28.464461  254975 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:28.505546  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:28.550245  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:28.550282  254975 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:25:28.550393  254975 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.550419  254975 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.550425  254975 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.550466  254975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.550416  254975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.550388  254975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.550543  254975 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0817 22:25:28.550382  254975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551670  254975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551673  254975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.551765  254975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.551779  254975 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.551793  254975 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0817 22:25:28.551814  254975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.551841  254975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.552852  254975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.736900  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.746950  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.747215  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.749256  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.754813  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0817 22:25:28.767639  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.778459  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.834796  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.845176  254975 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0817 22:25:28.845233  254975 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.845295  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.896784  254975 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0817 22:25:28.896843  254975 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.896901  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919129  254975 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0817 22:25:28.919247  254975 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.919192  254975 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0817 22:25:28.919301  254975 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.919320  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919332  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972779  254975 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0817 22:25:28.972831  254975 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0817 22:25:28.972863  254975 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0817 22:25:28.972898  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972901  254975 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.973013  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.986909  254975 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0817 22:25:28.986957  254975 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.987007  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:29.083047  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:29.083137  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:29.083204  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:29.083276  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0817 22:25:29.083227  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0817 22:25:29.083354  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:29.083408  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:29.214678  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0817 22:25:29.214743  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0817 22:25:29.214777  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0817 22:25:29.214847  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0817 22:25:29.214934  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.221086  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0817 22:25:29.221101  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0817 22:25:29.221162  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0817 22:25:29.223655  254975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0817 22:25:29.223684  254975 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.223753  254975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0817 22:25:30.774685  254975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550895846s)
	I0817 22:25:30.774722  254975 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0817 22:25:30.774776  254975 cache_images.go:92] LoadImages completed in 2.224475745s
	W0817 22:25:30.774942  254975 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
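
Each cached image is first looked up in the guest's container storage and only transferred and loaded when that lookup fails, which is why only pause_3.1 is loaded here while the remaining tarballs are reported missing from the host cache. A small Go sketch of that check-then-load step, with the SSH command runner abstracted behind a function type (hypothetical):

package main

import "fmt"

// runner stands in for the SSH command runner the log uses; any function
// that executes a shell command on the guest and reports success will do.
type runner func(cmd string) (string, error)

// ensureImage mirrors the inspect-then-load sequence above: if podman already
// knows the image ID, nothing is transferred; otherwise the cached tarball
// (already copied to the guest) is loaded into the container storage.
func ensureImage(run runner, image, cachedTar string) error {
	if _, err := run(fmt.Sprintf("sudo podman image inspect --format {{.Id}} %s", image)); err == nil {
		return nil // already present in the runtime, no transfer needed
	}
	if _, err := run(fmt.Sprintf("sudo podman load -i %s", cachedTar)); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, cachedTar, err)
	}
	return nil
}

func main() {
	// Dry run: print the commands instead of executing anything.
	dry := func(cmd string) (string, error) { fmt.Println(cmd); return "", fmt.Errorf("not executed") }
	_ = ensureImage(dry, "registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1")
}
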
	I0817 22:25:30.775051  254975 ssh_runner.go:195] Run: crio config
	I0817 22:25:30.840592  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:30.840623  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:30.840650  254975 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:30.840680  254975 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-294781 NodeName:old-k8s-version-294781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 22:25:30.840917  254975 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-294781"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-294781
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.56:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:30.841030  254975 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-294781 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:25:30.841111  254975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0817 22:25:30.850719  254975 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:30.850818  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:30.862807  254975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0817 22:25:30.882111  254975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:30.900496  254975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0817 22:25:30.921163  254975 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:30.925789  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:30.941284  254975 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781 for IP: 192.168.72.56
	I0817 22:25:30.941335  254975 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:30.941556  254975 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:30.941617  254975 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:30.941728  254975 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/client.key
	I0817 22:25:30.941792  254975 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key.aa8f9bd0
	I0817 22:25:30.941827  254975 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key
	I0817 22:25:30.941948  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:30.941994  254975 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:30.942005  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:30.942039  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:30.942107  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:30.942141  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:30.942200  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:30.942953  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:30.973814  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:25:31.003939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:31.035137  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:25:31.063172  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:31.092059  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:31.120881  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:31.148113  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:31.175102  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:31.204939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:31.231548  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:31.263908  254975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:31.287143  254975 ssh_runner.go:195] Run: openssl version
	I0817 22:25:31.293380  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:31.307058  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313520  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313597  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.321182  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:31.332412  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:31.343318  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.348972  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.349044  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.355568  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:31.366257  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:31.376489  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382818  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382919  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.390171  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
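The three "test -L ... || ln -fs ..." steps above follow the standard OpenSSL c_rehash convention: each certificate copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (for example b5213941.0 for minikubeCA.pem), so TLS clients on the node can find it by hash lookup. A minimal stand-alone sketch of the same idea, using paths from the log; the helper name is illustrative and not minikube code:

    # Link a CA into /etc/ssl/certs under its OpenSSL subject-hash name,
    # mirroring the "test -L ... || ln -fs ..." commands in the log above.
    link_ca_by_hash() {
      local pem="$1"                                    # e.g. /etc/ssl/certs/minikubeCA.pem
      local hash
      hash="$(openssl x509 -hash -noout -in "$pem")"    # prints e.g. b5213941
      sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    }

    link_ca_by_hash /etc/ssl/certs/minikubeCA.pem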
	I0817 22:25:31.400360  254975 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:31.406177  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:31.413881  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:31.422198  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:31.429468  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:31.437072  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:31.444150  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
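The six openssl checks just above all use "-checkend 86400": openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, and exits non-zero otherwise, which is how minikube decides the existing control-plane certificates can be reused for the restart. A runnable equivalent for a single certificate (the wrapper around the call is illustrative only):

    # Exit status of `openssl x509 -checkend N` is 0 only if the certificate is
    # still valid N seconds from now; the log uses N=86400 (24 hours).
    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h (or is already invalid)"
    fi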
	I0817 22:25:31.450952  254975 kubeadm.go:404] StartCluster: {Name:old-k8s-version-294781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:31.451064  254975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:31.451140  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:31.489009  254975 cri.go:89] found id: ""
	I0817 22:25:31.489098  254975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:31.499098  254975 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:31.499126  254975 kubeadm.go:636] restartCluster start
	I0817 22:25:31.499191  254975 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:31.510909  254975 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.512049  254975 kubeconfig.go:92] found "old-k8s-version-294781" server: "https://192.168.72.56:8443"
	I0817 22:25:31.514634  254975 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:31.525968  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.526039  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.539397  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.539423  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.539485  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.552492  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:28.276789  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:30.406349  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:29.190524  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.195732  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.919929  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.415784  255491 pod_ready.go:92] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:32.415817  255491 pod_ready.go:81] duration metric: took 5.530013816s waiting for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:32.415840  255491 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:34.435177  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.435405  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.053512  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.053604  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.065409  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.553555  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.553647  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.566402  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.052703  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.052785  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.069027  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.552583  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.552724  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.566692  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.053418  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.053493  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.065794  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.553389  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.553490  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.566130  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.052663  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.052753  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.065276  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.553446  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.553544  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.567754  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.053326  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.053407  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.066562  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.553098  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.553200  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.564869  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
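Every "Checking apiserver status ..." attempt above runs the same probe and keeps failing with exit status 1 simply because no kube-apiserver process exists yet at this point of the restart. The probe can be reproduced by hand on the node (flags as in the log: -f matches against the full command line, -x requires that line to match the pattern exactly, -n picks the newest match):

    # Prints the PID of the newest process whose full command line matches the
    # pattern; exits 1 with no output (as seen repeatedly above) when none exists.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'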
	I0817 22:25:32.777224  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:35.273781  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.276847  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:33.690890  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.190746  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.435673  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.435712  255491 pod_ready.go:81] duration metric: took 5.019858859s waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.435724  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441582  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.441602  255491 pod_ready.go:81] duration metric: took 5.870633ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441614  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448615  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.448643  255491 pod_ready.go:81] duration metric: took 7.021551ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448656  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454742  255491 pod_ready.go:92] pod "kube-proxy-4d9dx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.454768  255491 pod_ready.go:81] duration metric: took 6.104572ms waiting for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454780  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462598  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.462623  255491 pod_ready.go:81] duration metric: took 7.834341ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462637  255491 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:39.741207  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.053213  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.053363  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.065752  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:37.553604  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.553709  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.569278  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.052848  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.052956  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.065011  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.552809  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.552915  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.564702  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.053287  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.053378  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.065004  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.553557  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.553654  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.565776  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.053269  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.053352  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.065089  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.552595  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.552718  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.564921  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.053531  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:41.053617  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:41.065803  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.526724  254975 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:41.526774  254975 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:41.526788  254975 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:41.526858  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:41.560831  254975 cri.go:89] found id: ""
	I0817 22:25:41.560931  254975 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:41.577926  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:41.587081  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:41.587169  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596656  254975 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596690  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:41.716908  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:39.776178  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.275946  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:38.193834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:40.691324  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.692667  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:41.745307  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:44.242440  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.243469  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.840419  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123468828s)
	I0817 22:25:42.840454  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.062568  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.150374  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.265948  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:43.266043  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.284133  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.804512  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.304041  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.803961  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.828050  254975 api_server.go:72] duration metric: took 1.562100837s to wait for apiserver process to appear ...
	I0817 22:25:44.828085  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:44.828102  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.828570  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:44.828611  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.829005  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:45.329868  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.276477  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.775206  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:45.189460  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:47.690349  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:48.741121  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.742231  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.330553  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:50.330619  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.714219  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.714253  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:51.714268  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.756012  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.756052  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:49.276427  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.775567  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:49.698834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:52.190711  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.829442  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.888999  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:51.889031  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.329747  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.337398  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.337432  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.829817  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.839157  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.839187  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:53.329580  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:53.336858  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:25:53.347151  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:25:53.347191  254975 api_server.go:131] duration metric: took 8.519097199s to wait for apiserver health ...
	I0817 22:25:53.347204  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:53.347212  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:53.349243  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:52.743242  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:55.241261  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:53.350976  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:53.364808  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
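The bridge CNI step above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the file contents are not echoed into the log. The sketch below is only a representative bridge + portmap conflist in the upstream CNI format; the plugin fields and the pod subnet are assumptions, not the exact file minikube generated here:

    # Assumption: illustrative bridge CNI config, not minikube's actual template.
    sudo mkdir -p /etc/cni/net.d
    printf '%s\n' '{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "addIf": "true",
         "isDefaultGateway": true, "ipMasq": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null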
	I0817 22:25:53.397606  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:53.411868  254975 system_pods.go:59] 7 kube-system pods found
	I0817 22:25:53.411903  254975 system_pods.go:61] "coredns-5644d7b6d9-nz5d2" [5514f434-2c17-42dc-b35b-fef5bd6886fb] Running
	I0817 22:25:53.411909  254975 system_pods.go:61] "etcd-old-k8s-version-294781" [75919c29-02ae-46f6-8173-507b491d16da] Running
	I0817 22:25:53.411920  254975 system_pods.go:61] "kube-apiserver-old-k8s-version-294781" [f6d458ca-a84f-40dc-8b6a-b53fb8062c50] Running
	I0817 22:25:53.411930  254975 system_pods.go:61] "kube-controller-manager-old-k8s-version-294781" [0827f676-c11c-44b1-9bca-f8f905448490] Pending
	I0817 22:25:53.411937  254975 system_pods.go:61] "kube-proxy-f2bdh" [8b0dfe14-026a-44e1-9c6f-7f16fb61f90e] Running
	I0817 22:25:53.411943  254975 system_pods.go:61] "kube-scheduler-old-k8s-version-294781" [9ced2a30-44a8-421f-94ef-19be20b58c5d] Running
	I0817 22:25:53.411947  254975 system_pods.go:61] "storage-provisioner" [c9c05cca-5426-4071-a408-815c723a76f3] Running
	I0817 22:25:53.411954  254975 system_pods.go:74] duration metric: took 14.318728ms to wait for pod list to return data ...
	I0817 22:25:53.411961  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:53.415672  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:53.415715  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:53.415731  254975 node_conditions.go:105] duration metric: took 3.76549ms to run NodePressure ...
	I0817 22:25:53.415758  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:53.808911  254975 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:53.814276  254975 retry.go:31] will retry after 200.301174ms: kubelet not initialised
	I0817 22:25:54.020423  254975 retry.go:31] will retry after 376.047728ms: kubelet not initialised
	I0817 22:25:54.401967  254975 retry.go:31] will retry after 672.586884ms: kubelet not initialised
	I0817 22:25:55.079229  254975 retry.go:31] will retry after 1.101994757s: kubelet not initialised
	I0817 22:25:56.186236  254975 retry.go:31] will retry after 770.380926ms: kubelet not initialised
	I0817 22:25:53.777865  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.275799  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:54.690880  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.189416  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.242279  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.742604  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.961679  254975 retry.go:31] will retry after 2.235217601s: kubelet not initialised
	I0817 22:25:59.205012  254975 retry.go:31] will retry after 2.063266757s: kubelet not initialised
	I0817 22:26:01.275712  254975 retry.go:31] will retry after 5.105867057s: kubelet not initialised
	I0817 22:25:58.774815  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.275856  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.190180  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.692286  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.744707  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.240683  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.388158  254975 retry.go:31] will retry after 3.608427827s: kubelet not initialised
	I0817 22:26:03.775281  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.274839  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.190713  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.689980  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.742399  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.742739  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.004038  254975 retry.go:31] will retry after 8.940252852s: kubelet not initialised
	I0817 22:26:08.275499  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.275871  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.696436  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:11.189718  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.240363  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.241894  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:12.776238  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.274945  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.690119  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:16.189786  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:17.741982  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:20.242289  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.951040  254975 retry.go:31] will retry after 14.553103306s: kubelet not initialised
	I0817 22:26:17.774269  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:19.775075  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.274390  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.690720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:21.191013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.242355  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.742592  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.275310  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:26.774906  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:23.690032  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:25.690127  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.692342  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.243421  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:29.245714  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:28.777378  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.274134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:30.189730  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:32.689849  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.741791  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.240900  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:36.241988  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:33.521718  254975 kubeadm.go:787] kubelet initialised
	I0817 22:26:33.521745  254975 kubeadm.go:788] duration metric: took 39.712803989s waiting for restarted kubelet to initialise ...
	I0817 22:26:33.521755  254975 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:26:33.535522  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545447  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.545474  254975 pod_ready.go:81] duration metric: took 9.918514ms waiting for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545487  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551823  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.551853  254975 pod_ready.go:81] duration metric: took 6.357251ms waiting for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551867  254975 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559246  254975 pod_ready.go:92] pod "etcd-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.559278  254975 pod_ready.go:81] duration metric: took 7.402957ms waiting for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559291  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565344  254975 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.565373  254975 pod_ready.go:81] duration metric: took 6.072723ms waiting for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565387  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909036  254975 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.909073  254975 pod_ready.go:81] duration metric: took 343.677116ms waiting for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909089  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308592  254975 pod_ready.go:92] pod "kube-proxy-f2bdh" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.308619  254975 pod_ready.go:81] duration metric: took 399.522419ms waiting for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308630  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708489  254975 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.708517  254975 pod_ready.go:81] duration metric: took 399.879822ms waiting for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708528  254975 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
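From here on the interleaved pod_ready lines are minikube's readiness polling for the metrics-server pods across the parallel profiles, and they keep reporting "Ready":"False" for the rest of this window. Roughly the same wait can be issued by hand; the context name comes from the log, while the label selector is an assumption about how the metrics-server pod is labelled:

    # Wait (up to 4 minutes, matching the log's budget) for the metrics-server
    # pod in kube-system to report the Ready condition.
    kubectl --context old-k8s-version-294781 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m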
	I0817 22:26:33.275646  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:35.774730  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.692013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.191914  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.242929  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.741450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.516268  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.275712  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.774133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.690461  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:41.690828  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.242204  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.741216  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:42.016209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.516019  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.275668  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.776837  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.189846  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:46.691439  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.742285  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.241123  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.016817  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.517406  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:48.276244  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.774977  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.189105  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:51.190270  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.241800  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.739978  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.016631  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.515565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.516890  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.274258  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.278000  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.192619  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.693990  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.742737  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.241115  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.241654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.015461  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.017347  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:57.775264  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.775399  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.776382  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:58.190121  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:00.190792  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:02.697428  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.741654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.742940  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.516565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.516966  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:04.275212  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:06.277355  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.190366  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:07.190973  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.244485  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.741985  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.015202  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.016691  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.774384  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.774729  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:09.692011  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.190853  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.742313  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:15.241577  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.514881  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.516950  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.517383  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.774867  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.775482  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.274793  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.689813  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.692012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.243159  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.517518  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.016576  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.275829  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.276653  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.692315  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.189564  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:22.240740  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:24.241960  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.242201  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.017348  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.515756  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.775957  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.275937  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.189646  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.690338  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.690947  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.741912  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.742165  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.516071  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.517838  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.276630  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.775134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.691012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:31.696187  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:33.241142  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:35.243536  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.017452  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.515974  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.516450  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.775448  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.775822  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.274968  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.188369  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.188928  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.741436  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.741983  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.015982  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.516526  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.278879  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.774782  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:38.189378  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:40.695851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:42.240995  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.741178  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.015737  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.018254  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.776276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.276133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.188678  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:45.189618  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:47.191825  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.741669  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.241194  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.242571  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.516687  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.016735  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.277486  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:50.775420  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.689852  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.691216  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.741209  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.743232  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.518209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.016075  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.275443  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.774204  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.692276  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.190072  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.242009  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:00.242183  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.516449  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.016290  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:57.775327  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:59.775642  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.275827  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.691467  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.189998  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.740875  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.742481  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.523305  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.016025  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.275917  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.777604  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.190940  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:05.690559  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.693124  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.241721  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.241889  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:08.017490  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.018815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.274176  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.275009  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.190851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.689465  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.741056  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.241846  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:16.243898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.516550  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.017547  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:13.276368  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.773960  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.690587  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.189824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:18.742657  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.243561  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.515978  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:20.016035  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.774474  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.776240  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.275209  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.194335  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.691142  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:23.743251  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.241450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.021055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.516645  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.776861  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.274029  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.189740  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.691801  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:28.242364  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:30.740610  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.016851  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.017289  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.517096  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.774126  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.275287  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.189744  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.691190  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.741643  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:35.242108  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.015792  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.016247  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.773849  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.777072  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:33.692774  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.189115  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:37.741756  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.244685  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.016815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.017616  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:39.276756  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:41.774190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.190001  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.690824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.742547  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.241354  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.518073  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.016560  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.776627  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:46.275092  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.189166  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.692178  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.697772  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.242829  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.741555  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.516429  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.516588  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:48.775347  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:51.274069  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:50.191415  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.694362  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.242367  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.742705  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.019113  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.516748  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:53.275190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.773511  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.189720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.189811  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.241152  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.242170  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.015866  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.016464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.515901  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.776667  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:00.273941  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.190719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.190988  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.741107  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.742524  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.243093  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.516444  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.017964  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:02.775583  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.280071  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.690586  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.643882  255057 pod_ready.go:81] duration metric: took 4m0.000182343s waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:05.643921  255057 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:05.643932  255057 pod_ready.go:38] duration metric: took 4m2.754707603s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:05.643956  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:29:05.643998  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:05.644060  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:05.703194  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:05.703221  255057 cri.go:89] found id: ""
	I0817 22:29:05.703229  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:05.703283  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.708602  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:05.708676  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:05.747581  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:05.747610  255057 cri.go:89] found id: ""
	I0817 22:29:05.747619  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:05.747692  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.753231  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:05.753331  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:05.795460  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:05.795489  255057 cri.go:89] found id: ""
	I0817 22:29:05.795499  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:05.795562  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.801181  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:05.801268  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:05.840433  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:05.840463  255057 cri.go:89] found id: ""
	I0817 22:29:05.840472  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:05.840546  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.845974  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:05.846039  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:05.886216  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:05.886243  255057 cri.go:89] found id: ""
	I0817 22:29:05.886252  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:05.886314  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.891204  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:05.891286  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:05.927636  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:05.927661  255057 cri.go:89] found id: ""
	I0817 22:29:05.927669  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:05.927732  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.932173  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:05.932230  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:05.963603  255057 cri.go:89] found id: ""
	I0817 22:29:05.963634  255057 logs.go:284] 0 containers: []
	W0817 22:29:05.963646  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:05.963654  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:05.963727  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:05.996465  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:05.996489  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:05.996496  255057 cri.go:89] found id: ""
	I0817 22:29:05.996505  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:05.996572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.001291  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.006314  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:06.006348  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:06.051348  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:06.051386  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:06.226315  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:06.226362  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:06.263289  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:06.263321  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:06.308223  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:06.308262  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:06.346964  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:06.347001  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:06.382834  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:06.382878  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:06.431491  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:06.431527  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:06.485901  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:06.485948  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:07.054256  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:07.054315  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:07.093229  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093417  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093570  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093737  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.119377  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:07.119420  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:07.137712  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:07.137756  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:07.187463  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:07.187511  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:07.252728  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252775  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:07.252844  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:07.252856  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252865  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252872  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252878  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.252884  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252890  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:08.741270  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:11.245029  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:08.516388  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:10.518542  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:07.775391  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:09.775841  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:12.276748  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.741788  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:16.242264  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.018983  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:15.516221  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.774832  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.967926  255215 pod_ready.go:81] duration metric: took 4m0.000797383s waiting for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:14.967968  255215 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:14.967995  255215 pod_ready.go:38] duration metric: took 4m12.638851973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:14.968025  255215 kubeadm.go:640] restartCluster took 4m34.07416066s
	W0817 22:29:14.968112  255215 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:14.968150  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:17.254245  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:29:17.278452  255057 api_server.go:72] duration metric: took 4m21.775005609s to wait for apiserver process to appear ...
	I0817 22:29:17.278488  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:29:17.278540  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:17.278675  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:17.317529  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:17.317554  255057 cri.go:89] found id: ""
	I0817 22:29:17.317562  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:17.317626  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.323505  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:17.323593  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:17.367258  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.367282  255057 cri.go:89] found id: ""
	I0817 22:29:17.367290  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:17.367355  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.372332  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:17.372424  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:17.406884  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:17.406914  255057 cri.go:89] found id: ""
	I0817 22:29:17.406923  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:17.406990  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.411562  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:17.411626  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:17.452516  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.452549  255057 cri.go:89] found id: ""
	I0817 22:29:17.452560  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:17.452654  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.458237  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:17.458327  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:17.498524  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:17.498550  255057 cri.go:89] found id: ""
	I0817 22:29:17.498559  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:17.498621  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.504941  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:17.505024  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:17.543542  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.543570  255057 cri.go:89] found id: ""
	I0817 22:29:17.543580  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:17.543646  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.548420  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:17.548488  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:17.589411  255057 cri.go:89] found id: ""
	I0817 22:29:17.589441  255057 logs.go:284] 0 containers: []
	W0817 22:29:17.589449  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:17.589455  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:17.589520  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:17.624044  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:17.624075  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.624083  255057 cri.go:89] found id: ""
	I0817 22:29:17.624092  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:17.624160  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.631040  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.635336  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:17.635359  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:17.688966  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689294  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689576  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689899  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:17.729861  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:17.729923  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:17.746619  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:17.746663  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.805149  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:17.805198  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.842639  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:17.842673  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.905357  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:17.905406  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.943860  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:17.943893  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:18.242331  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:20.742262  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:17.517585  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:19.519464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:18.114000  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:18.114038  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:18.176549  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:18.176602  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:18.211903  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:18.211947  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:18.246566  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:18.246600  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:18.280810  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:18.280853  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:18.831902  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:18.831957  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:18.883170  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883219  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:18.883304  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:18.883323  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883336  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883352  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883364  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:18.883382  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883391  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:23.242587  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:25.742126  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:22.017269  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:24.017806  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:26.516458  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.241489  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:30.741723  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.516703  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:31.016367  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.884252  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:29:28.889957  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:29:28.891532  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:29:28.891560  255057 api_server.go:131] duration metric: took 11.613062869s to wait for apiserver health ...
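Editor's note: the healthz probe recorded in the lines above can be reproduced by hand against the same endpoint. A minimal sketch, with the address and port copied from the log; -k skips certificate verification, which is acceptable only for a throwaway check against a test VM:

    # query the apiserver health endpoint on the no-preload node (address taken from the log above)
    curl -k https://192.168.61.196:8443/healthz
    # a healthy control plane answers HTTP 200 with the body "ok", matching the result logged above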
	I0817 22:29:28.891571  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:29:28.891602  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:28.891669  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:28.927462  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:28.927496  255057 cri.go:89] found id: ""
	I0817 22:29:28.927506  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:28.927572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.932195  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:28.932284  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:28.974041  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:28.974092  255057 cri.go:89] found id: ""
	I0817 22:29:28.974103  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:28.974172  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.978230  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:28.978302  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:29.012431  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.012459  255057 cri.go:89] found id: ""
	I0817 22:29:29.012469  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:29.012539  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.017232  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:29.017311  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:29.051208  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.051235  255057 cri.go:89] found id: ""
	I0817 22:29:29.051242  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:29.051292  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.056125  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:29.056193  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:29.094165  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.094196  255057 cri.go:89] found id: ""
	I0817 22:29:29.094207  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:29.094277  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.098992  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:29.099054  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:29.138522  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.138552  255057 cri.go:89] found id: ""
	I0817 22:29:29.138561  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:29.138614  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.143075  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:29.143159  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:29.177797  255057 cri.go:89] found id: ""
	I0817 22:29:29.177831  255057 logs.go:284] 0 containers: []
	W0817 22:29:29.177842  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:29.177850  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:29.177916  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:29.208897  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.208922  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.208928  255057 cri.go:89] found id: ""
	I0817 22:29:29.208937  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:29.209008  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.213083  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.217020  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:29.217043  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:29.253559  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253779  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253989  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.254225  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:29.280705  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:29.280746  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:29.295400  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:29.295432  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:29.344222  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:29.344268  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:29.482768  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:29.482812  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:29.541274  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:29.541317  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.577842  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:29.577876  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.613556  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:29.613595  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.654840  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:29.654886  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.711929  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:29.711974  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.749746  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:29.749802  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.782899  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:29.782932  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:30.286425  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:30.286488  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:30.328588  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328616  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:30.328686  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:30.328701  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328715  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328729  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328745  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:30.328754  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328762  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:32.741952  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.241640  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:33.516723  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.516887  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.339646  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:29:40.339676  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.339681  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.339685  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.339690  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.339694  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.339698  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.339705  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.339711  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.339722  255057 system_pods.go:74] duration metric: took 11.448139171s to wait for pod list to return data ...
	I0817 22:29:40.339730  255057 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:29:40.344246  255057 default_sa.go:45] found service account: "default"
	I0817 22:29:40.344271  255057 default_sa.go:55] duration metric: took 4.534553ms for default service account to be created ...
	I0817 22:29:40.344280  255057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:29:40.353485  255057 system_pods.go:86] 8 kube-system pods found
	I0817 22:29:40.353521  255057 system_pods.go:89] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.353529  255057 system_pods.go:89] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.353537  255057 system_pods.go:89] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.353546  255057 system_pods.go:89] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.353553  255057 system_pods.go:89] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.353560  255057 system_pods.go:89] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.353579  255057 system_pods.go:89] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.353589  255057 system_pods.go:89] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.353598  255057 system_pods.go:126] duration metric: took 9.313259ms to wait for k8s-apps to be running ...
	I0817 22:29:40.353612  255057 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:29:40.353685  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:40.376714  255057 system_svc.go:56] duration metric: took 23.088082ms WaitForService to wait for kubelet.
	I0817 22:29:40.376759  255057 kubeadm.go:581] duration metric: took 4m44.873323742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:29:40.377191  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:29:40.385016  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:29:40.385043  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:29:40.385055  255057 node_conditions.go:105] duration metric: took 7.857619ms to run NodePressure ...
	I0817 22:29:40.385068  255057 start.go:228] waiting for startup goroutines ...
	I0817 22:29:40.385074  255057 start.go:233] waiting for cluster config update ...
	I0817 22:29:40.385085  255057 start.go:242] writing updated cluster config ...
	I0817 22:29:40.385411  255057 ssh_runner.go:195] Run: rm -f paused
	I0817 22:29:40.457420  255057 start.go:600] kubectl: 1.28.0, cluster: 1.28.0-rc.1 (minor skew: 0)
	I0817 22:29:40.460043  255057 out.go:177] * Done! kubectl is now configured to use "no-preload-525875" cluster and "default" namespace by default
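Editor's note: with the "no-preload-525875" context configured, the metrics-server pod that the waiter repeatedly reports as not Ready can be inspected directly. A minimal sketch using the pod name taken from the pod list earlier in this log:

    # show scheduling and container-state details for the stuck metrics-server pod
    kubectl --context no-preload-525875 -n kube-system get pod metrics-server-57f55c9bc5-25p7z -o wide
    kubectl --context no-preload-525875 -n kube-system describe pod metrics-server-57f55c9bc5-25p7z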
	I0817 22:29:37.242898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:37.462917  255491 pod_ready.go:81] duration metric: took 4m0.00026087s waiting for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:37.462956  255491 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:37.463009  255491 pod_ready.go:38] duration metric: took 4m10.583985022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:37.463050  255491 kubeadm.go:640] restartCluster took 4m32.042723788s
	W0817 22:29:37.463141  255491 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:37.463185  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:37.517852  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.016790  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:42.517001  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:45.016757  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:47.291163  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.322979002s)
	I0817 22:29:47.291246  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:47.305948  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:29:47.316036  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:29:47.325470  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:29:47.325519  255215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:29:47.566297  255215 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:29:47.017112  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:49.017246  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:51.018095  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:53.519020  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:56.016627  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.087786  255215 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:29:59.087860  255215 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:29:59.087991  255215 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:29:59.088169  255215 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:29:59.088306  255215 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:29:59.088388  255215 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:29:59.090358  255215 out.go:204]   - Generating certificates and keys ...
	I0817 22:29:59.090460  255215 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:29:59.090547  255215 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:29:59.090660  255215 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:29:59.090766  255215 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:29:59.090886  255215 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:29:59.090976  255215 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:29:59.091060  255215 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:29:59.091152  255215 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:29:59.091250  255215 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:29:59.091350  255215 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:29:59.091435  255215 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:29:59.091514  255215 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:29:59.091589  255215 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:29:59.091655  255215 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:29:59.091759  255215 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:29:59.091836  255215 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:29:59.091960  255215 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:29:59.092070  255215 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:29:59.092127  255215 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:29:59.092207  255215 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:29:59.094268  255215 out.go:204]   - Booting up control plane ...
	I0817 22:29:59.094408  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:29:59.094513  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:29:59.094594  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:29:59.094719  255215 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:29:59.094944  255215 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:29:59.095031  255215 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504676 seconds
	I0817 22:29:59.095206  255215 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:29:59.095401  255215 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:29:59.095494  255215 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:29:59.095757  255215 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-437183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:29:59.095844  255215 kubeadm.go:322] [bootstrap-token] Using token: 0fftkt.nm31ryo8p4990tdr
	I0817 22:29:59.097581  255215 out.go:204]   - Configuring RBAC rules ...
	I0817 22:29:59.097750  255215 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:29:59.097884  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:29:59.098097  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:29:59.098258  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:29:59.098405  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:29:59.098510  255215 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:29:59.098679  255215 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:29:59.098745  255215 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:29:59.098802  255215 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:29:59.098811  255215 kubeadm.go:322] 
	I0817 22:29:59.098889  255215 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:29:59.098898  255215 kubeadm.go:322] 
	I0817 22:29:59.099010  255215 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:29:59.099033  255215 kubeadm.go:322] 
	I0817 22:29:59.099069  255215 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:29:59.099142  255215 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:29:59.099221  255215 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:29:59.099232  255215 kubeadm.go:322] 
	I0817 22:29:59.099297  255215 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:29:59.099307  255215 kubeadm.go:322] 
	I0817 22:29:59.099365  255215 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:29:59.099374  255215 kubeadm.go:322] 
	I0817 22:29:59.099446  255215 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:29:59.099552  255215 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:29:59.099660  255215 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:29:59.099670  255215 kubeadm.go:322] 
	I0817 22:29:59.099799  255215 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:29:59.099909  255215 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:29:59.099917  255215 kubeadm.go:322] 
	I0817 22:29:59.100037  255215 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100173  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:29:59.100205  255215 kubeadm.go:322] 	--control-plane 
	I0817 22:29:59.100218  255215 kubeadm.go:322] 
	I0817 22:29:59.100348  255215 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:29:59.100359  255215 kubeadm.go:322] 
	I0817 22:29:59.100472  255215 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100610  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:29:59.100639  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:29:59.100650  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:29:59.102534  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:29:58.017949  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:00.519619  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.104107  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:29:59.128756  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
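Editor's note: the 457-byte file written above is minikube's bridge CNI configuration. Its exact contents are not captured in this log; the sketch below is a representative bridge-plus-portmap conflist of that general shape, with all field values illustrative rather than taken from the run:

    # illustrative only: a bridge CNI conflist of the kind written to /etc/cni/net.d/1-k8s.conflist
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF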
	I0817 22:29:59.172002  255215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=embed-certs-437183 minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.717974  255215 ops.go:34] apiserver oom_adj: -16
	I0817 22:29:59.718154  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.815994  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.419198  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.919196  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.419096  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.919517  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:02.419076  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.017120  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:05.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:02.919289  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.419268  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.919021  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.418663  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.919015  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.419325  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.919309  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.418701  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.919301  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.418670  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.919445  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.419363  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.918988  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.418788  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.918948  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.418731  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.919293  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.419374  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.578800  255215 kubeadm.go:1081] duration metric: took 12.40679081s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:11.578850  255215 kubeadm.go:406] StartCluster complete in 5m30.729798213s
	I0817 22:30:11.578877  255215 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.578990  255215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:11.581741  255215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.582107  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:11.582305  255215 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:11.582414  255215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-437183"
	I0817 22:30:11.582435  255215 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-437183"
	I0817 22:30:11.582433  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:11.582436  255215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-437183"
	I0817 22:30:11.582449  255215 addons.go:69] Setting metrics-server=true in profile "embed-certs-437183"
	I0817 22:30:11.582461  255215 addons.go:231] Setting addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:11.582465  255215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-437183"
	W0817 22:30:11.582467  255215 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:11.582521  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	W0817 22:30:11.582443  255215 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:11.582609  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.582956  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582976  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582992  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583000  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583326  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.583361  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.600606  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0817 22:30:11.601162  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.601890  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.601918  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.602386  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.603044  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.603110  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.603922  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0817 22:30:11.604193  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0817 22:30:11.604476  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.604711  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.605320  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605342  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605474  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605500  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605874  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.605927  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.606184  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.606616  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.606654  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.622026  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0817 22:30:11.622822  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.623522  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.623556  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.624021  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.624332  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.626478  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.629171  255215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:11.627845  255215 addons.go:231] Setting addon default-storageclass=true in "embed-certs-437183"
	W0817 22:30:11.629212  255215 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:11.629267  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.628437  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0817 22:30:11.629683  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.631294  255215 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.631295  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.629905  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.631315  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:11.631339  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.632333  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.632356  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.632860  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.633085  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.635520  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.635727  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.638116  255215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:09.776936  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.313725935s)
	I0817 22:30:09.777008  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:09.794808  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:09.806086  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:09.818495  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:09.818547  255491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:30:10.061316  255491 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:30:11.636353  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.636644  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.640483  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.640486  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:11.640508  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:11.640535  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.640703  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.640905  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.641073  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.645685  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646351  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.646376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646867  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.647096  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.647286  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.647444  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.655819  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0817 22:30:11.656540  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.657308  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.657326  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.657864  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.658485  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.658520  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.679610  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0817 22:30:11.680268  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.680977  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.681013  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.681485  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.681722  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.683711  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.686274  255215 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.686297  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:11.686323  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.692154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.692160  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692245  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.692288  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692447  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.692691  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.692899  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.742259  255215 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-437183" context rescaled to 1 replicas
	I0817 22:30:11.742317  255215 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:11.744647  255215 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:07.516999  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:10.016647  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:11.746674  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:11.833127  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.853282  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:11.853316  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:11.858219  255215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.858353  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:11.889330  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.896554  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:11.896595  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:11.906260  255215 node_ready.go:49] node "embed-certs-437183" has status "Ready":"True"
	I0817 22:30:11.906292  255215 node_ready.go:38] duration metric: took 48.027482ms waiting for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.906305  255215 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:11.949379  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:11.949409  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:12.023543  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:12.131426  255215 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:14.420517  255215 pod_ready.go:102] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.647805  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.814629092s)
	I0817 22:30:14.647842  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78945104s)
	I0817 22:30:14.647874  255215 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:14.647904  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.758517925s)
	I0817 22:30:14.647915  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648017  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648042  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648067  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648478  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.648532  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.648626  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.648638  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648656  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648882  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.649025  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.649050  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.649069  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.650529  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.650577  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.650586  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.650600  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.650614  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.651171  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.651230  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.652509  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652529  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.652688  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652708  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.175766  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.152137099s)
	I0817 22:30:15.175888  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.175915  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176344  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.176343  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.176428  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.176452  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.176488  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176915  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.178804  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.178827  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.178840  255215 addons.go:467] Verifying addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:15.180928  255215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:30:12.018605  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.519226  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:15.182515  255215 addons.go:502] enable addons completed in 3.600222172s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:30:16.920634  255215 pod_ready.go:92] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.920664  255215 pod_ready.go:81] duration metric: took 4.789200515s waiting for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.920674  255215 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937440  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.937469  255215 pod_ready.go:81] duration metric: took 16.789093ms waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937483  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944411  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.944437  255215 pod_ready.go:81] duration metric: took 6.944986ms waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944451  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952239  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.952267  255215 pod_ready.go:81] duration metric: took 7.807798ms waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952281  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815597  255215 pod_ready.go:92] pod "kube-proxy-2f6jz" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:17.815630  255215 pod_ready.go:81] duration metric: took 863.340907ms waiting for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815644  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108648  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:18.108683  255215 pod_ready.go:81] duration metric: took 293.029473ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108693  255215 pod_ready.go:38] duration metric: took 6.202373203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:18.108726  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:18.108789  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:18.129379  255215 api_server.go:72] duration metric: took 6.38701969s to wait for apiserver process to appear ...
	I0817 22:30:18.129409  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:18.129425  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:30:18.138226  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:30:18.141542  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:18.141568  255215 api_server.go:131] duration metric: took 12.152138ms to wait for apiserver health ...
	I0817 22:30:18.141579  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:18.312736  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:30:18.312782  255215 system_pods.go:61] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.312790  255215 system_pods.go:61] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.312798  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.312804  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.312811  255215 system_pods.go:61] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.312817  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.312831  255215 system_pods.go:61] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.312841  255215 system_pods.go:61] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.312855  255215 system_pods.go:74] duration metric: took 171.269837ms to wait for pod list to return data ...
	I0817 22:30:18.312868  255215 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:18.511271  255215 default_sa.go:45] found service account: "default"
	I0817 22:30:18.511380  255215 default_sa.go:55] duration metric: took 198.492073ms for default service account to be created ...
	I0817 22:30:18.511401  255215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:18.710880  255215 system_pods.go:86] 8 kube-system pods found
	I0817 22:30:18.710911  255215 system_pods.go:89] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.710917  255215 system_pods.go:89] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.710921  255215 system_pods.go:89] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.710926  255215 system_pods.go:89] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.710929  255215 system_pods.go:89] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.710933  255215 system_pods.go:89] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.710943  255215 system_pods.go:89] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.710949  255215 system_pods.go:89] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.710958  255215 system_pods.go:126] duration metric: took 199.549571ms to wait for k8s-apps to be running ...
	I0817 22:30:18.710967  255215 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:18.711013  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:18.725788  255215 system_svc.go:56] duration metric: took 14.807351ms WaitForService to wait for kubelet.
	I0817 22:30:18.725819  255215 kubeadm.go:581] duration metric: took 6.983465617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:18.725846  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:18.908038  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:18.908079  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:18.908093  255215 node_conditions.go:105] duration metric: took 182.240177ms to run NodePressure ...
	I0817 22:30:18.908108  255215 start.go:228] waiting for startup goroutines ...
	I0817 22:30:18.908127  255215 start.go:233] waiting for cluster config update ...
	I0817 22:30:18.908142  255215 start.go:242] writing updated cluster config ...
	I0817 22:30:18.908536  255215 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:18.962718  255215 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:18.965052  255215 out.go:177] * Done! kubectl is now configured to use "embed-certs-437183" cluster and "default" namespace by default
	I0817 22:30:17.018314  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:19.517055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:21.517216  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:22.302082  255491 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:30:22.302198  255491 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:22.302316  255491 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:22.302392  255491 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:22.302537  255491 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:22.302623  255491 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:22.304947  255491 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:22.305043  255491 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:22.305112  255491 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:22.305227  255491 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:22.305295  255491 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:22.305389  255491 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:22.305466  255491 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:22.305540  255491 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:22.305614  255491 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:22.305703  255491 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:22.305801  255491 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:22.305861  255491 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:22.305956  255491 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:22.306043  255491 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:22.306141  255491 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:22.306231  255491 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:22.306313  255491 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:22.306462  255491 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:22.306597  255491 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:22.306674  255491 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:30:22.306778  255491 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:22.308372  255491 out.go:204]   - Booting up control plane ...
	I0817 22:30:22.308478  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:22.308565  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:22.308644  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:22.308735  255491 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:22.308942  255491 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:22.309046  255491 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003655 seconds
	I0817 22:30:22.309195  255491 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:22.309352  255491 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:22.309430  255491 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:22.309656  255491 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-321287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:30:22.309729  255491 kubeadm.go:322] [bootstrap-token] Using token: vtugjh.yrdml71jezyixk01
	I0817 22:30:22.311499  255491 out.go:204]   - Configuring RBAC rules ...
	I0817 22:30:22.311610  255491 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:30:22.311706  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:30:22.311887  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:30:22.312069  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:30:22.312240  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:30:22.312338  255491 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:30:22.312462  255491 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:30:22.312516  255491 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:30:22.312583  255491 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:30:22.312595  255491 kubeadm.go:322] 
	I0817 22:30:22.312680  255491 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:30:22.312693  255491 kubeadm.go:322] 
	I0817 22:30:22.312798  255491 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:30:22.312806  255491 kubeadm.go:322] 
	I0817 22:30:22.312847  255491 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:30:22.312926  255491 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:30:22.313008  255491 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:30:22.313016  255491 kubeadm.go:322] 
	I0817 22:30:22.313073  255491 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:30:22.313092  255491 kubeadm.go:322] 
	I0817 22:30:22.313135  255491 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:30:22.313141  255491 kubeadm.go:322] 
	I0817 22:30:22.313180  255491 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:30:22.313271  255491 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:30:22.313397  255491 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:30:22.313421  255491 kubeadm.go:322] 
	I0817 22:30:22.313561  255491 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:30:22.313670  255491 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:30:22.313691  255491 kubeadm.go:322] 
	I0817 22:30:22.313790  255491 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.313910  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:30:22.313930  255491 kubeadm.go:322] 	--control-plane 
	I0817 22:30:22.313933  255491 kubeadm.go:322] 
	I0817 22:30:22.314017  255491 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:30:22.314029  255491 kubeadm.go:322] 
	I0817 22:30:22.314161  255491 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.314324  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:30:22.314342  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:30:22.314352  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:30:22.316092  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:30:22.317823  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:30:22.330216  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:30:22.364427  255491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:30:22.364530  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.364541  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=default-k8s-diff-port-321287 minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.398800  255491 ops.go:34] apiserver oom_adj: -16
	I0817 22:30:22.789239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.908906  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.507279  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.007071  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.507204  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.007980  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.507764  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.007834  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.507449  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.518185  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:26.017066  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:27.007162  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:27.507978  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.008024  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.507376  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.007583  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.507355  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.007416  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.507014  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.007539  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.507116  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.516778  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:31.016979  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:32.007363  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:32.508019  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.007624  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.507337  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.007239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.507255  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.007804  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.507323  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.647403  255491 kubeadm.go:1081] duration metric: took 13.282950211s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:35.647439  255491 kubeadm.go:406] StartCluster complete in 5m30.275148595s
	I0817 22:30:35.647465  255491 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.647562  255491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:35.649294  255491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.649625  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:35.649672  255491 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:35.649793  255491 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649815  255491 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.649827  255491 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:35.649857  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:35.649897  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.649914  255491 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649931  255491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-321287"
	I0817 22:30:35.650130  255491 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.650154  255491 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.650163  255491 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:35.650207  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.650360  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650362  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650397  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650456  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650616  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650660  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.666863  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0817 22:30:35.666883  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0817 22:30:35.667444  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.667657  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.668085  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668105  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668245  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668256  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668780  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.669523  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.669553  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.670006  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:30:35.670382  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.670448  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.670513  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.670985  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.671005  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.671824  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.672870  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.672905  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.682146  255491 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.682167  255491 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:35.682200  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.682640  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.682674  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.690436  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0817 22:30:35.691039  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.691642  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.691666  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.692056  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.692328  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.692416  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0817 22:30:35.693048  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.693566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.693588  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.693974  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.694180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.694314  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.696623  255491 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:35.696015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.698535  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:35.698555  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:35.698593  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.700284  255491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:35.702071  255491 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.702097  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:35.702127  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.703050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.703111  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.703161  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703297  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.703498  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.703605  255491 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-321287" context rescaled to 1 replicas
	I0817 22:30:35.703641  255491 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:35.706989  255491 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:35.703707  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.707227  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.707832  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40363
	I0817 22:30:35.708116  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.709223  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:35.709358  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.709408  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.709426  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.709650  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.709767  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.709979  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.710587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.710608  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.711008  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.711578  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.711631  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.730317  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35051
	I0817 22:30:35.730875  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.731566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.731595  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.731993  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.732228  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.734475  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.734778  255491 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.734799  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:35.734822  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.737878  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.738359  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738478  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.739396  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.739599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.739850  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.902960  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.913205  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.936947  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:35.936977  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:35.977717  255491 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.977876  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:35.984231  255491 node_ready.go:49] node "default-k8s-diff-port-321287" has status "Ready":"True"
	I0817 22:30:35.984286  255491 node_ready.go:38] duration metric: took 6.524258ms waiting for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.984302  255491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:36.008884  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:36.008915  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:36.010024  255491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.073572  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.073607  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:36.139665  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.382827  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.382863  255491 pod_ready.go:81] duration metric: took 372.809939ms waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.382878  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513607  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.513640  255491 pod_ready.go:81] duration metric: took 130.752675ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513653  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610942  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.610974  255491 pod_ready.go:81] duration metric: took 97.312774ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610989  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:33.017198  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:34.709633  254975 pod_ready.go:81] duration metric: took 4m0.001081095s waiting for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	E0817 22:30:34.709679  254975 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:30:34.709709  254975 pod_ready.go:38] duration metric: took 4m1.187941338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:34.709762  254975 kubeadm.go:640] restartCluster took 5m3.210628062s
	W0817 22:30:34.709854  254975 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:30:34.709895  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:30:38.629738  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.716488882s)
	I0817 22:30:38.629799  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651889874s)
	I0817 22:30:38.629829  255491 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:38.629802  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629871  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.629753  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.726738359s)
	I0817 22:30:38.629944  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629971  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630368  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630389  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630401  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630429  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630528  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630559  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630578  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630587  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630677  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.630707  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630732  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630973  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630991  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.631004  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.631007  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.631015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.632993  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.633019  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.633033  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.758987  255491 pod_ready.go:102] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:39.084274  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.944554423s)
	I0817 22:30:39.084336  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.084785  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.084799  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:39.084817  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.084829  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084842  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.085152  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.085168  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.085179  255491 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-321287"
	I0817 22:30:39.087296  255491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:30:39.089202  255491 addons.go:502] enable addons completed in 3.439530445s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:30:41.238328  255491 pod_ready.go:92] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.238358  255491 pod_ready.go:81] duration metric: took 4.627360634s waiting for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.238376  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.244985  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.245011  255491 pod_ready.go:81] duration metric: took 6.626883ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.245022  255491 pod_ready.go:38] duration metric: took 5.260700173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:41.245042  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:41.245097  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:41.262899  255491 api_server.go:72] duration metric: took 5.559222986s to wait for apiserver process to appear ...
	I0817 22:30:41.262935  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:41.262957  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:30:41.268642  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:30:41.269921  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:41.269947  255491 api_server.go:131] duration metric: took 7.005146ms to wait for apiserver health ...
	I0817 22:30:41.269955  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:41.276807  255491 system_pods.go:59] 9 kube-system pods found
	I0817 22:30:41.276844  255491 system_pods.go:61] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.276855  255491 system_pods.go:61] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.276863  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.276868  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.276875  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.276883  255491 system_pods.go:61] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.276890  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.276908  255491 system_pods.go:61] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.276918  255491 system_pods.go:61] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.276929  255491 system_pods.go:74] duration metric: took 6.967523ms to wait for pod list to return data ...
	I0817 22:30:41.276941  255491 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:41.279696  255491 default_sa.go:45] found service account: "default"
	I0817 22:30:41.279724  255491 default_sa.go:55] duration metric: took 2.773544ms for default service account to be created ...
	I0817 22:30:41.279735  255491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:41.286220  255491 system_pods.go:86] 9 kube-system pods found
	I0817 22:30:41.286258  255491 system_pods.go:89] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.286269  255491 system_pods.go:89] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.286277  255491 system_pods.go:89] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.286283  255491 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.286287  255491 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.286292  255491 system_pods.go:89] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.286296  255491 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.286302  255491 system_pods.go:89] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.286306  255491 system_pods.go:89] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.286316  255491 system_pods.go:126] duration metric: took 6.576272ms to wait for k8s-apps to be running ...
	I0817 22:30:41.286326  255491 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:41.286373  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:41.301841  255491 system_svc.go:56] duration metric: took 15.499888ms WaitForService to wait for kubelet.
	I0817 22:30:41.301874  255491 kubeadm.go:581] duration metric: took 5.598205886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:41.301898  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:41.306253  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:41.306289  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:41.306300  255491 node_conditions.go:105] duration metric: took 4.396496ms to run NodePressure ...
	I0817 22:30:41.306311  255491 start.go:228] waiting for startup goroutines ...
	I0817 22:30:41.306320  255491 start.go:233] waiting for cluster config update ...
	I0817 22:30:41.306329  255491 start.go:242] writing updated cluster config ...
	I0817 22:30:41.306617  255491 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:41.363947  255491 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:41.366167  255491 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-321287" cluster and "default" namespace by default
	I0817 22:30:47.861835  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.151914062s)
	I0817 22:30:47.861926  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:47.877704  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:47.888385  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:47.898212  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:47.898269  254975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0817 22:30:47.957871  254975 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0817 22:30:47.958020  254975 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:48.121563  254975 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:48.121724  254975 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:48.121869  254975 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:48.316212  254975 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:48.316379  254975 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:48.324040  254975 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0817 22:30:48.453946  254975 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:48.456278  254975 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:48.456403  254975 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:48.456486  254975 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:48.456629  254975 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:48.456723  254975 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:48.456831  254975 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:48.456916  254975 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:48.456992  254975 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:48.457084  254975 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:48.457233  254975 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:48.457347  254975 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:48.457400  254975 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:48.457478  254975 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:48.599977  254975 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:48.760474  254975 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:48.873066  254975 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:48.958450  254975 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:48.959335  254975 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:48.961565  254975 out.go:204]   - Booting up control plane ...
	I0817 22:30:48.961672  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:48.972854  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:48.974149  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:48.975110  254975 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:48.981334  254975 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:58.986028  254975 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004044 seconds
	I0817 22:30:58.986232  254975 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:59.005484  254975 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:59.530563  254975 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:59.530730  254975 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-294781 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 22:31:00.039739  254975 kubeadm.go:322] [bootstrap-token] Using token: y5v57w.cds9r5wk990e6rgq
	I0817 22:31:00.041700  254975 out.go:204]   - Configuring RBAC rules ...
	I0817 22:31:00.041831  254975 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:31:00.051302  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:31:00.056478  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:31:00.060403  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:31:00.065454  254975 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:31:00.155583  254975 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:31:00.472429  254975 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:31:00.474442  254975 kubeadm.go:322] 
	I0817 22:31:00.474512  254975 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:31:00.474554  254975 kubeadm.go:322] 
	I0817 22:31:00.474671  254975 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:31:00.474686  254975 kubeadm.go:322] 
	I0817 22:31:00.474708  254975 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:31:00.474808  254975 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:31:00.474883  254975 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:31:00.474895  254975 kubeadm.go:322] 
	I0817 22:31:00.474973  254975 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:31:00.475082  254975 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:31:00.475179  254975 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:31:00.475193  254975 kubeadm.go:322] 
	I0817 22:31:00.475308  254975 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0817 22:31:00.475421  254975 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:31:00.475431  254975 kubeadm.go:322] 
	I0817 22:31:00.475551  254975 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.475696  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:31:00.475750  254975 kubeadm.go:322]     --control-plane 	  
	I0817 22:31:00.475759  254975 kubeadm.go:322] 
	I0817 22:31:00.475881  254975 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:31:00.475937  254975 kubeadm.go:322] 
	I0817 22:31:00.476044  254975 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.476196  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:31:00.476725  254975 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:31:00.476766  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:31:00.476782  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:31:00.478932  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:31:00.480754  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:31:00.496449  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:31:00.527578  254975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:31:00.527658  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.527769  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=old-k8s-version-294781 minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.809784  254975 ops.go:34] apiserver oom_adj: -16
	I0817 22:31:00.809925  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.991957  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:01.627311  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.126890  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.626673  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.127657  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.627284  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.127320  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.627026  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.127336  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.626721  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.127279  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.626697  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.127307  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.626920  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.127266  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.626970  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.126923  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.626808  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.127298  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.627182  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.126639  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.626681  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.127321  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.626904  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.127274  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.627272  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.127457  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.627280  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.127333  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.231130  254975 kubeadm.go:1081] duration metric: took 14.703542822s to wait for elevateKubeSystemPrivileges.
	I0817 22:31:15.231183  254975 kubeadm.go:406] StartCluster complete in 5m43.780243338s
	I0817 22:31:15.231254  254975 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.231391  254975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:31:15.233245  254975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.233533  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:31:15.233848  254975 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:31:15.233927  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:31:15.233947  254975 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-294781"
	I0817 22:31:15.233968  254975 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-294781"
	W0817 22:31:15.233977  254975 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:31:15.233983  254975 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234001  254975 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234007  254975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-294781"
	I0817 22:31:15.234021  254975 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-294781"
	W0817 22:31:15.234040  254975 addons.go:240] addon metrics-server should already be in state true
	I0817 22:31:15.234075  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234097  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234576  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234581  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234650  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.252847  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0817 22:31:15.252891  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0817 22:31:15.253743  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.253833  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.254616  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254632  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.254713  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0817 22:31:15.254887  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254906  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.255216  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255276  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.255294  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255865  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255872  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255960  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.255977  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.256400  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.256604  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.269860  254975 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-294781"
	W0817 22:31:15.269883  254975 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:31:15.269911  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.270304  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.270335  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.273014  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0817 22:31:15.273532  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.274114  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.274134  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.274549  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.274769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.276415  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.276491  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0817 22:31:15.278935  254975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:31:15.277380  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.278041  254975 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-294781" context rescaled to 1 replicas
	I0817 22:31:15.280642  254975 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:31:15.282441  254975 out.go:177] * Verifying Kubernetes components...
	I0817 22:31:15.280856  254975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.281832  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.284263  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.284347  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:31:15.284348  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:31:15.284366  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.285256  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.285580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.288289  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.288456  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.290643  254975 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:31:15.289601  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.289769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.292678  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:31:15.292693  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:31:15.292721  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.292776  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.293060  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.293277  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.293791  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.297193  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0817 22:31:15.297816  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.298486  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.298506  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.298962  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.299508  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.299531  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.300275  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.300994  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.301024  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.301098  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.301296  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.301502  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.301651  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.321283  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0817 22:31:15.321876  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.322943  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.322971  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.323496  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.323842  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.326563  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.326910  254975 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.326933  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:31:15.326957  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.330190  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.330947  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.330978  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.331193  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.331422  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.331552  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.331681  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.497277  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.529500  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.531359  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:31:15.531381  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:31:15.585477  254975 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.585494  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:31:15.590969  254975 node_ready.go:49] node "old-k8s-version-294781" has status "Ready":"True"
	I0817 22:31:15.591001  254975 node_ready.go:38] duration metric: took 5.470452ms waiting for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.591012  254975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:15.594026  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:31:15.594077  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:31:15.596784  254975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:15.638420  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:15.638455  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:31:15.707735  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:16.690916  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.193582768s)
	I0817 22:31:16.690987  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691002  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691002  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161462189s)
	I0817 22:31:16.691042  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105375097s)
	I0817 22:31:16.691044  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691217  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691158  254975 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0817 22:31:16.691422  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691464  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691490  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691561  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691512  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691586  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691603  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691630  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691813  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691832  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692047  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692086  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692110  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.692130  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.692114  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.692460  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692480  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828440  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.120652237s)
	I0817 22:31:16.828511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828525  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.828913  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.828939  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828952  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828963  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.829228  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.829252  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.829264  254975 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-294781"
	I0817 22:31:16.829279  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.831430  254975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:31:16.834005  254975 addons.go:502] enable addons completed in 1.600151352s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:31:17.618673  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.110224  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.610989  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.611015  254975 pod_ready.go:81] duration metric: took 5.014205232s waiting for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.611025  254975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616618  254975 pod_ready.go:92] pod "kube-proxy-44jmp" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.616639  254975 pod_ready.go:81] duration metric: took 5.608097ms waiting for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616646  254975 pod_ready.go:38] duration metric: took 5.025620457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:20.616695  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:31:20.616748  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:31:20.633102  254975 api_server.go:72] duration metric: took 5.352419031s to wait for apiserver process to appear ...
	I0817 22:31:20.633131  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:31:20.633152  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:31:20.640585  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:31:20.641784  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:31:20.641807  254975 api_server.go:131] duration metric: took 8.66923ms to wait for apiserver health ...
	I0817 22:31:20.641815  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:31:20.647851  254975 system_pods.go:59] 4 kube-system pods found
	I0817 22:31:20.647904  254975 system_pods.go:61] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.647909  254975 system_pods.go:61] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.647917  254975 system_pods.go:61] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.647923  254975 system_pods.go:61] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.647929  254975 system_pods.go:74] duration metric: took 6.108947ms to wait for pod list to return data ...
	I0817 22:31:20.647937  254975 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:31:20.651451  254975 default_sa.go:45] found service account: "default"
	I0817 22:31:20.651485  254975 default_sa.go:55] duration metric: took 3.540013ms for default service account to be created ...
	I0817 22:31:20.651496  254975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:31:20.655529  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.655556  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.655561  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.655567  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.655575  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.655593  254975 retry.go:31] will retry after 194.203175ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:20.860033  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.860063  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.860069  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.860076  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.860082  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.860098  254975 retry.go:31] will retry after 273.217607ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.138457  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.138483  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.138488  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.138494  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.138501  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.138520  254975 retry.go:31] will retry after 311.999616ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.455473  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.455507  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.455513  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.455519  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.455526  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.455542  254975 retry.go:31] will retry after 462.378441ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.922656  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.922695  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.922703  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.922714  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.922724  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.922743  254975 retry.go:31] will retry after 595.850716ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:22.525024  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:22.525067  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:22.525076  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:22.525087  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:22.525100  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:22.525123  254975 retry.go:31] will retry after 916.880182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:23.446648  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:23.446678  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:23.446684  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:23.446691  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:23.446697  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:23.446717  254975 retry.go:31] will retry after 1.080769148s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:24.532239  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:24.532270  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:24.532277  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:24.532287  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:24.532296  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:24.532325  254975 retry.go:31] will retry after 1.261174641s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:25.798397  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:25.798430  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:25.798435  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:25.798442  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:25.798449  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:25.798465  254975 retry.go:31] will retry after 1.383083099s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:27.187782  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:27.187816  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:27.187821  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:27.187828  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:27.187834  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:27.187852  254975 retry.go:31] will retry after 1.954135672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:29.148294  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:29.148325  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:29.148330  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:29.148337  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:29.148344  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:29.148359  254975 retry.go:31] will retry after 2.632641562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:31.786946  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:31.786981  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:31.786988  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:31.786998  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:31.787010  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:31.787030  254975 retry.go:31] will retry after 3.626446493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:35.421023  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:35.421053  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:35.421059  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:35.421065  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:35.421072  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:35.421089  254975 retry.go:31] will retry after 2.800907689s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:38.228118  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:38.228155  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:38.228165  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:38.228177  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:38.228187  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:38.228204  254975 retry.go:31] will retry after 3.699626464s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:41.932868  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:41.932902  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:41.932908  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:41.932915  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:41.932922  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:41.932939  254975 retry.go:31] will retry after 6.965217948s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:48.913824  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:48.913866  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:48.913875  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:48.913899  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:48.913909  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:48.913931  254975 retry.go:31] will retry after 7.880328521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:56.800829  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:56.800868  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:56.800876  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:56.800887  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:56.800893  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:56.800915  254975 retry.go:31] will retry after 7.054585059s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:32:03.878268  254975 system_pods.go:86] 7 kube-system pods found
	I0817 22:32:03.878297  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:03.878304  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Pending
	I0817 22:32:03.878308  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Pending
	I0817 22:32:03.878311  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:03.878316  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:03.878324  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:03.878331  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:03.878351  254975 retry.go:31] will retry after 13.129481457s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0817 22:32:17.015570  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:17.015609  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:17.015619  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:17.015627  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:17.015634  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Pending
	I0817 22:32:17.015640  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:17.015647  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:17.015672  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:17.015682  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:17.015709  254975 retry.go:31] will retry after 15.332291563s: missing components: kube-controller-manager
	I0817 22:32:32.354549  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:32.354587  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:32.354596  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:32.354603  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:32.354613  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Running
	I0817 22:32:32.354619  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:32.354626  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:32.354637  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:32.354646  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:32.354657  254975 system_pods.go:126] duration metric: took 1m11.703154434s to wait for k8s-apps to be running ...
	I0817 22:32:32.354700  254975 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:32:32.354766  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:32:32.372492  254975 system_svc.go:56] duration metric: took 17.765249ms WaitForService to wait for kubelet.
	I0817 22:32:32.372541  254975 kubeadm.go:581] duration metric: took 1m17.091866023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:32:32.372573  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:32:32.377413  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:32:32.377442  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:32:32.377455  254975 node_conditions.go:105] duration metric: took 4.875282ms to run NodePressure ...
	I0817 22:32:32.377467  254975 start.go:228] waiting for startup goroutines ...
	I0817 22:32:32.377473  254975 start.go:233] waiting for cluster config update ...
	I0817 22:32:32.377483  254975 start.go:242] writing updated cluster config ...
	I0817 22:32:32.377828  254975 ssh_runner.go:195] Run: rm -f paused
	I0817 22:32:32.433865  254975 start.go:600] kubectl: 1.28.0, cluster: 1.16.0 (minor skew: 12)
	I0817 22:32:32.436131  254975 out.go:177] 
	W0817 22:32:32.437621  254975 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0817 22:32:32.439072  254975 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0817 22:32:32.440794  254975 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-294781" cluster and "default" namespace by default
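The retry.go entries above show minikube polling kube-system with a growing delay until etcd, kube-apiserver, kube-controller-manager and kube-scheduler all report Running. As a rough, hedged illustration of that poll-and-backoff pattern only (this is not minikube's code; the client-go wiring and the requiredComponents/waitForSystemPods/missingComponents names are invented for the sketch), a comparable wait loop might look like this:

// Sketch of a poll-with-backoff wait for kube-system components, in the spirit
// of the retry.go / system_pods.go lines above. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// requiredComponents mirrors the "missing components" named in the retry lines above.
var requiredComponents = []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

// missingComponents returns the required component names that have no Running pod.
func missingComponents(pods []corev1.Pod) []string {
	running := map[string]bool{}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			continue
		}
		for _, c := range requiredComponents {
			if strings.HasPrefix(p.Name, c) {
				running[c] = true
			}
		}
	}
	var missing []string
	for _, c := range requiredComponents {
		if !running[c] {
			missing = append(missing, c)
		}
	}
	return missing
}

// waitForSystemPods polls kube-system with a growing delay until every required
// component is Running or the timeout expires.
func waitForSystemPods(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil {
			missing := missingComponents(pods.Items)
			if len(missing) == 0 {
				return nil
			}
			fmt.Printf("will retry after %v: missing components: %s\n", backoff, strings.Join(missing, ", "))
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for kube-system pods")
		}
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff *= 2 // delays grow roughly like the retry intervals in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForSystemPods(context.Background(), cs, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all required kube-system components are running")
}

The doubling, capped delay above only loosely mirrors the 311ms → 462ms → ... → 15s sequence of retry intervals visible in the log; the exact backoff schedule minikube uses is not shown in this report.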
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:24:26 UTC, ends at Thu 2023-08-17 22:39:20 UTC. --
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.733685259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e51c455-ac72-454e-8286-d9bd269e0b13 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.775716456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=15d766f7-a0c8-4e5c-9cea-21727ec8a1b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.775801218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=15d766f7-a0c8-4e5c-9cea-21727ec8a1b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.776164637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=15d766f7-a0c8-4e5c-9cea-21727ec8a1b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788256400Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=cfb72314-3703-4c5b-9dd0-d988933e1b8b name=/runtime.v1.ImageService/ListImages
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788437781Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788589664Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788664747Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788737279Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788807047Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.788876720Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.789045462Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.789123717Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.789196115Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.789269329Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"" file="storage/storage_transport.go:185"
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.789436992Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,RepoTags:[registry.k8s.io/kube-apiserver:v1.27.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854],Size_:122078160,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,RepoTags:[registry.k8s.io/kube-controller-manager:v1.27.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265 registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934],Size_:113931062,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:98ef2570f3cde33
e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,RepoTags:[registry.k8s.io/kube-scheduler:v1.27.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7],Size_:59814710,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,RepoTags:[registry.k8s.io/kube-proxy:v1.27.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3],Size_:72714135,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd280
01e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,RepoTags:[registry.k8s.io/etcd:3.5.7-0],RepoDigests:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9],Size_:297083935,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:
[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,RepoTags:[docker.io/kindest/kindnetd:v20230511-dc714da8],RepoDigests:[docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974 docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9],Size_:65249302,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Use
rname:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=cfb72314-3703-4c5b-9dd0-d988933e1b8b name=/runtime.v1.ImageService/ListImages
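The debug lines in this journal are CRI-O serving the kubelet's CRI gRPC calls (ListContainers on runtime.v1alpha2 here, ListImages on runtime.v1) over its local socket. As a rough, hedged sketch only (the socket path and the choice of the v1 client are assumptions, not taken from this report), the same ListContainers endpoint can be queried directly with the cri-api Go client:

// Minimal sketch of calling the CRI RuntimeService/ListContainers endpoint that
// the CRI-O debug lines above are responding to. Assumes CRI-O's default socket
// path; not part of the test under report.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default endpoint; adjust if the runtime is configured differently.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns the full container list, matching the
	// "No filters were applied" debug message in the log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

From the command line, crictl ps and crictl images issue essentially the same RPCs and print the same container and image lists that appear in these responses.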
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.833760508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49c99e61-3d33-4931-97ca-e015e3f14d8d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.833841876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49c99e61-3d33-4931-97ca-e015e3f14d8d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.834133683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49c99e61-3d33-4931-97ca-e015e3f14d8d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.877688222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4cbd0785-0e52-4bf2-9613-ef890d576d30 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.877790185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4cbd0785-0e52-4bf2-9613-ef890d576d30 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.878130444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4cbd0785-0e52-4bf2-9613-ef890d576d30 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.917842015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ab14c7cd-ce99-4f38-8de9-4ff1f5002f41 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.918002400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ab14c7cd-ce99-4f38-8de9-4ff1f5002f41 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:20 embed-certs-437183 crio[726]: time="2023-08-17 22:39:20.918220636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ab14c7cd-ce99-4f38-8de9-4ff1f5002f41 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	5a080e9202ff1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a883fa663ed47
	70009069c37b7       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   9 minutes ago       Running             kube-proxy                0                   d1a15a97e0abc
	c78fe32267075       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   6fff8c5415a66
	1adcff7bb1e0f       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   9 minutes ago       Running             etcd                      2                   1ab6dfcb94f4b
	22c0de40f713b       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   9 minutes ago       Running             kube-scheduler            2                   897ac10edd52d
	7b97bbae5144d       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   9 minutes ago       Running             kube-apiserver            2                   9199bfa59e5a8
	5c17e7df1c775       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   9 minutes ago       Running             kube-controller-manager   2                   507891bfee492
	
	* 
	* ==> coredns [c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-437183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-437183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=embed-certs-437183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:29:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-437183
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:39:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:35:25 +0000   Thu, 17 Aug 2023 22:29:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:35:25 +0000   Thu, 17 Aug 2023 22:29:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:35:25 +0000   Thu, 17 Aug 2023 22:29:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:35:25 +0000   Thu, 17 Aug 2023 22:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    embed-certs-437183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e92f5f8fc5a04000a28188bccb075951
	  System UUID:                e92f5f8f-c5a0-4000-a281-88bccb075951
	  Boot ID:                    f4abf1b1-764f-4721-bf35-e191b40359b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-ghvnx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-437183                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-embed-certs-437183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-controller-manager-embed-certs-437183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-proxy-2f6jz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-437183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-74d5c6b9c-9zstm                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s  kubelet          Node embed-certs-437183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s  kubelet          Node embed-certs-437183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s  kubelet          Node embed-certs-437183 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m22s  kubelet          Node embed-certs-437183 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m22s  kubelet          Node embed-certs-437183 status is now: NodeReady
	  Normal  RegisteredNode           9m11s  node-controller  Node embed-certs-437183 event: Registered Node embed-certs-437183 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072947] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.403841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.589661] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154794] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.493867] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.453439] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.136518] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.171727] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.129850] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.268690] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.640819] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[Aug17 22:25] hrtimer: interrupt took 6556907 ns
	[  +6.601452] kauditd_printk_skb: 19 callbacks suppressed
	[Aug17 22:29] kauditd_printk_skb: 4 callbacks suppressed
	[ +29.401071] systemd-fstab-generator[3564]: Ignoring "noauto" for root device
	[  +9.841115] systemd-fstab-generator[3892]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074] <==
	* {"level":"info","ts":"2023-08-17T22:29:52.961Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T22:29:52.962Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T22:29:52.962Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T22:29:52.959Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2023-08-17T22:29:52.962Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2023-08-17T22:29:52.962Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"1bfd5d64eb00b2d5","initial-advertise-peer-urls":["https://192.168.39.186:2380"],"listen-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T22:29:52.962Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T22:29:53.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgPreVoteResp from 1bfd5d64eb00b2d5 at term 1"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 2"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2023-08-17T22:29:53.242Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.244Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:embed-certs-437183 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T22:29:53.244Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:29:53.246Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T22:29:53.246Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.186:2379"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:39:21 up 15 min,  0 users,  load average: 0.21, 0.27, 0.28
	Linux embed-certs-437183 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19] <==
	* E0817 22:34:56.348991       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:34:56.350203       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:35:55.225469       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:35:55.225680       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:35:56.349101       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:35:56.349237       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:35:56.349269       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:35:56.350407       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:35:56.350502       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:35:56.350529       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:36:55.225182       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:36:55.225426       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:37:55.224802       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:37:55.224863       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:37:56.350234       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:37:56.350337       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:37:56.350346       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:37:56.351649       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:37:56.351739       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:37:56.351806       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:38:55.224389       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:38:55.224476       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918] <==
	* W0817 22:33:11.392726       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:33:40.959696       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:33:41.405456       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:34:10.967103       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:34:11.416685       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:34:40.974402       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:34:41.431668       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:35:10.982051       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:35:11.441466       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:35:40.988421       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:35:41.450106       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:36:10.996026       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:36:11.462738       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:36:41.003682       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:36:41.471887       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:37:11.011364       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:37:11.482795       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:37:41.018251       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:37:41.493306       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:38:11.027503       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:38:11.505685       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:38:41.035771       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:38:41.531049       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:39:11.042734       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:39:11.541408       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c] <==
	* I0817 22:30:17.088623       1 node.go:141] Successfully retrieved node IP: 192.168.39.186
	I0817 22:30:17.088757       1 server_others.go:110] "Detected node IP" address="192.168.39.186"
	I0817 22:30:17.088791       1 server_others.go:554] "Using iptables proxy"
	I0817 22:30:17.134247       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 22:30:17.134313       1 server_others.go:192] "Using iptables Proxier"
	I0817 22:30:17.134367       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 22:30:17.135077       1 server.go:658] "Version info" version="v1.27.4"
	I0817 22:30:17.135116       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:30:17.136274       1 config.go:188] "Starting service config controller"
	I0817 22:30:17.136348       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 22:30:17.136400       1 config.go:97] "Starting endpoint slice config controller"
	I0817 22:30:17.136434       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 22:30:17.137447       1 config.go:315] "Starting node config controller"
	I0817 22:30:17.137482       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 22:30:17.236834       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 22:30:17.236837       1 shared_informer.go:318] Caches are synced for service config
	I0817 22:30:17.237607       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e] <==
	* W0817 22:29:56.197194       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 22:29:56.197253       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 22:29:56.269111       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:29:56.269200       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 22:29:56.512582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 22:29:56.512637       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 22:29:56.516621       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:29:56.516649       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 22:29:56.574311       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:29:56.574412       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 22:29:56.615114       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:29:56.615221       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0817 22:29:56.623789       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:29:56.623875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 22:29:56.623889       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:29:56.624000       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 22:29:56.689355       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:29:56.689452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 22:29:56.707896       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:29:56.708006       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 22:29:56.712322       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:29:56.712386       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 22:29:56.803734       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 22:29:56.803792       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 22:29:59.355271       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:24:26 UTC, ends at Thu 2023-08-17 22:39:21 UTC. --
	Aug 17 22:36:35 embed-certs-437183 kubelet[3899]: E0817 22:36:35.424433    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:36:50 embed-certs-437183 kubelet[3899]: E0817 22:36:50.419673    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:36:59 embed-certs-437183 kubelet[3899]: E0817 22:36:59.507649    3899 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:36:59 embed-certs-437183 kubelet[3899]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:36:59 embed-certs-437183 kubelet[3899]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:36:59 embed-certs-437183 kubelet[3899]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:37:04 embed-certs-437183 kubelet[3899]: E0817 22:37:04.420118    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:37:19 embed-certs-437183 kubelet[3899]: E0817 22:37:19.421097    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:37:32 embed-certs-437183 kubelet[3899]: E0817 22:37:32.419789    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:37:46 embed-certs-437183 kubelet[3899]: E0817 22:37:46.419880    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:37:58 embed-certs-437183 kubelet[3899]: E0817 22:37:58.420124    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:37:59 embed-certs-437183 kubelet[3899]: E0817 22:37:59.512378    3899 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:37:59 embed-certs-437183 kubelet[3899]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:37:59 embed-certs-437183 kubelet[3899]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:37:59 embed-certs-437183 kubelet[3899]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:38:11 embed-certs-437183 kubelet[3899]: E0817 22:38:11.422213    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:38:22 embed-certs-437183 kubelet[3899]: E0817 22:38:22.419887    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:38:37 embed-certs-437183 kubelet[3899]: E0817 22:38:37.420239    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:38:49 embed-certs-437183 kubelet[3899]: E0817 22:38:49.423241    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:38:59 embed-certs-437183 kubelet[3899]: E0817 22:38:59.509716    3899 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:38:59 embed-certs-437183 kubelet[3899]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:38:59 embed-certs-437183 kubelet[3899]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:38:59 embed-certs-437183 kubelet[3899]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:39:04 embed-certs-437183 kubelet[3899]: E0817 22:39:04.420245    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:39:16 embed-certs-437183 kubelet[3899]: E0817 22:39:16.420493    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	
	* 
	* ==> storage-provisioner [5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577] <==
	* I0817 22:30:16.993753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:30:17.012134       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:30:17.012256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:30:17.036047       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:30:17.036595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-437183_7b43298b-a344-4382-9361-149305b30baa!
	I0817 22:30:17.041028       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9b2b7ab-b416-4200-93c3-29398470d58a", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-437183_7b43298b-a344-4382-9361-149305b30baa became leader
	I0817 22:30:17.141279       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-437183_7b43298b-a344-4382-9361-149305b30baa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-437183 -n embed-certs-437183
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-437183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-9zstm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-437183 describe pod metrics-server-74d5c6b9c-9zstm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-437183 describe pod metrics-server-74d5c6b9c-9zstm: exit status 1 (75.197277ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-9zstm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-437183 describe pod metrics-server-74d5c6b9c-9zstm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.47s)
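Note on the failure above: the only non-running pod reported by the post-mortem, metrics-server-74d5c6b9c-9zstm, is stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (see the kubelet section of the log dump), which lines up with the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" step recorded in the command audit for this profile. One way to confirm which image the addon was rewritten to use is sketched below; the deployment name metrics-server is assumed from the pod name, and the jsonpath expression is illustrative rather than taken from the harness:

	kubectl --context embed-certs-437183 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'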

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0817 22:30:50.284184  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:30:55.683605  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:32:07.553610  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:32:13.351387  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:32:14.045597  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:32:18.730324  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:39:41.934981224 +0000 UTC m=+5355.575748952
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
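The 9m0s wait above targets pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace of the default-k8s-diff-port-321287 profile. A rough manual equivalent is sketched below; using kubectl wait with the Ready condition is an approximation of the test's polling, not its actual implementation:

	kubectl --context default-k8s-diff-port-321287 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-321287 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

If the selector matches no pods at all, kubectl wait typically exits immediately with a "no matching resources found" error, which distinguishes a dashboard that was never deployed from one whose pods exist but never became Ready.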
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-321287 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-321287 logs -n 25: (1.680582217s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-975779 sudo cat                              | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo find                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo crio                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-975779                                       | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:20:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:20:16.712686  255491 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:20:16.712825  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.712835  255491 out.go:309] Setting ErrFile to fd 2...
	I0817 22:20:16.712839  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.713062  255491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:20:16.713667  255491 out.go:303] Setting JSON to false
	I0817 22:20:16.714624  255491 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25342,"bootTime":1692285475,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:20:16.714682  255491 start.go:138] virtualization: kvm guest
	I0817 22:20:16.717535  255491 out.go:177] * [default-k8s-diff-port-321287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:20:16.719151  255491 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:20:16.720536  255491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:20:16.719158  255491 notify.go:220] Checking for updates...
	I0817 22:20:16.724470  255491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:20:16.726182  255491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:20:16.727902  255491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:20:16.729516  255491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:20:16.731373  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:20:16.731749  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.731825  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.746961  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0817 22:20:16.747404  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.748088  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.748116  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.748449  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.748618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.748847  255491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:20:16.749194  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.749239  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.764882  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0817 22:20:16.765357  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.765874  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.765901  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.766289  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.766480  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.802457  255491 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:20:16.804215  255491 start.go:298] selected driver: kvm2
	I0817 22:20:16.804235  255491 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 Cl
usterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.804379  255491 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:20:16.805157  255491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.805248  255491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:20:16.821166  255491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:20:16.821564  255491 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 22:20:16.821606  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:20:16.821619  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:20:16.821631  255491 start_flags.go:319] config:
	{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.821815  255491 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.823863  255491 out.go:177] * Starting control plane node default-k8s-diff-port-321287 in cluster default-k8s-diff-port-321287
	I0817 22:20:16.825296  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:20:16.825350  255491 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:20:16.825365  255491 cache.go:57] Caching tarball of preloaded images
	I0817 22:20:16.825521  255491 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:20:16.825536  255491 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 22:20:16.825660  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:20:16.825870  255491 start.go:365] acquiring machines lock for default-k8s-diff-port-321287: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:20:17.790384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:20.862432  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:26.942301  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:30.014393  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:36.094411  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:39.166376  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:45.246382  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:48.318418  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:54.398388  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:57.470394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:03.550380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:06.622365  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:12.702351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:15.774370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:21.854413  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:24.926351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:31.006415  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:34.078332  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:40.158437  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:43.230410  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:49.310359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:52.382386  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:58.462394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:01.534395  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:07.614359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:10.686384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:16.766363  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:19.838352  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:25.918380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:28.990416  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:35.070383  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:38.142364  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:44.222341  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:47.294387  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:53.374378  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:56.446375  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:02.526335  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:05.598406  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:11.678435  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:14.750370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:20.830484  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:23.902346  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:29.982456  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:33.054379  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:39.134436  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:42.206472  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:48.286396  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:51.358348  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:54.362645  255057 start.go:369] acquired machines lock for "no-preload-525875" in 4m31.301140971s
	I0817 22:23:54.362883  255057 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:23:54.362929  255057 fix.go:54] fixHost starting: 
	I0817 22:23:54.363423  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:23:54.363467  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:23:54.379127  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0817 22:23:54.379699  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:23:54.380334  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:23:54.380357  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:23:54.380797  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:23:54.381004  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:23:54.381209  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:23:54.383099  255057 fix.go:102] recreateIfNeeded on no-preload-525875: state=Stopped err=<nil>
	I0817 22:23:54.383145  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	W0817 22:23:54.383332  255057 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:23:54.385187  255057 out.go:177] * Restarting existing kvm2 VM for "no-preload-525875" ...
	I0817 22:23:54.360325  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:23:54.360394  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:23:54.362467  254975 machine.go:91] provisioned docker machine in 4m37.411699893s
	I0817 22:23:54.362520  254975 fix.go:56] fixHost completed within 4m37.434281244s
	I0817 22:23:54.362529  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 4m37.434304432s
	W0817 22:23:54.362577  254975 start.go:672] error starting host: provision: host is not running
	W0817 22:23:54.363017  254975 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0817 22:23:54.363033  254975 start.go:687] Will try again in 5 seconds ...
	I0817 22:23:54.386615  255057 main.go:141] libmachine: (no-preload-525875) Calling .Start
	I0817 22:23:54.386791  255057 main.go:141] libmachine: (no-preload-525875) Ensuring networks are active...
	I0817 22:23:54.387647  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network default is active
	I0817 22:23:54.387973  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network mk-no-preload-525875 is active
	I0817 22:23:54.388332  255057 main.go:141] libmachine: (no-preload-525875) Getting domain xml...
	I0817 22:23:54.389183  255057 main.go:141] libmachine: (no-preload-525875) Creating domain...
	I0817 22:23:55.639391  255057 main.go:141] libmachine: (no-preload-525875) Waiting to get IP...
	I0817 22:23:55.640405  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.640824  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.640956  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.640807  256033 retry.go:31] will retry after 256.854902ms: waiting for machine to come up
	I0817 22:23:55.899499  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.900003  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.900027  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.899976  256033 retry.go:31] will retry after 327.686689ms: waiting for machine to come up
	I0817 22:23:56.229604  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.230132  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.230156  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.230040  256033 retry.go:31] will retry after 464.52975ms: waiting for machine to come up
	I0817 22:23:56.695962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.696359  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.696397  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.696313  256033 retry.go:31] will retry after 556.975938ms: waiting for machine to come up
	I0817 22:23:57.255156  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.255625  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.255664  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.255564  256033 retry.go:31] will retry after 654.756806ms: waiting for machine to come up
	I0817 22:23:57.911407  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.911781  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.911805  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.911733  256033 retry.go:31] will retry after 915.751745ms: waiting for machine to come up
	I0817 22:23:59.364671  254975 start.go:365] acquiring machines lock for old-k8s-version-294781: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:23:58.828834  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:58.829178  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:58.829236  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:58.829153  256033 retry.go:31] will retry after 1.176413613s: waiting for machine to come up
	I0817 22:24:00.006988  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:00.007533  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:00.007603  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:00.007525  256033 retry.go:31] will retry after 1.031006631s: waiting for machine to come up
	I0817 22:24:01.039920  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:01.040354  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:01.040386  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:01.040293  256033 retry.go:31] will retry after 1.781447675s: waiting for machine to come up
	I0817 22:24:02.823240  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:02.823711  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:02.823755  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:02.823652  256033 retry.go:31] will retry after 1.47392319s: waiting for machine to come up
	I0817 22:24:04.299094  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:04.299543  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:04.299572  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:04.299479  256033 retry.go:31] will retry after 1.990284782s: waiting for machine to come up
	I0817 22:24:06.292369  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:06.292831  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:06.292862  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:06.292749  256033 retry.go:31] will retry after 3.34318874s: waiting for machine to come up
	I0817 22:24:09.637907  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:09.638389  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:09.638423  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:09.638335  256033 retry.go:31] will retry after 3.298106143s: waiting for machine to come up
	I0817 22:24:12.939215  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939668  255057 main.go:141] libmachine: (no-preload-525875) Found IP for machine: 192.168.61.196
	I0817 22:24:12.939692  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has current primary IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939709  255057 main.go:141] libmachine: (no-preload-525875) Reserving static IP address...
	I0817 22:24:12.940293  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.940330  255057 main.go:141] libmachine: (no-preload-525875) Reserved static IP address: 192.168.61.196
	I0817 22:24:12.940347  255057 main.go:141] libmachine: (no-preload-525875) DBG | skip adding static IP to network mk-no-preload-525875 - found existing host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"}
	I0817 22:24:12.940364  255057 main.go:141] libmachine: (no-preload-525875) DBG | Getting to WaitForSSH function...
	I0817 22:24:12.940381  255057 main.go:141] libmachine: (no-preload-525875) Waiting for SSH to be available...
	I0817 22:24:12.942523  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.942835  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.942870  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.943013  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH client type: external
	I0817 22:24:12.943058  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa (-rw-------)
	I0817 22:24:12.943104  255057 main.go:141] libmachine: (no-preload-525875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:12.943125  255057 main.go:141] libmachine: (no-preload-525875) DBG | About to run SSH command:
	I0817 22:24:12.943135  255057 main.go:141] libmachine: (no-preload-525875) DBG | exit 0
	I0817 22:24:14.123211  255215 start.go:369] acquired machines lock for "embed-certs-437183" in 4m31.345681226s
	I0817 22:24:14.123281  255215 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:14.123298  255215 fix.go:54] fixHost starting: 
	I0817 22:24:14.123769  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:14.123822  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:14.141321  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0817 22:24:14.141722  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:14.142372  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:24:14.142409  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:14.142871  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:14.143076  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:14.143300  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:24:14.144928  255215 fix.go:102] recreateIfNeeded on embed-certs-437183: state=Stopped err=<nil>
	I0817 22:24:14.144960  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	W0817 22:24:14.145216  255215 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:14.148036  255215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-437183" ...
	I0817 22:24:13.033987  255057 main.go:141] libmachine: (no-preload-525875) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:13.034450  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetConfigRaw
	I0817 22:24:13.035251  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.037756  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038141  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.038176  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038475  255057 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/config.json ...
	I0817 22:24:13.038679  255057 machine.go:88] provisioning docker machine ...
	I0817 22:24:13.038704  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.038922  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039086  255057 buildroot.go:166] provisioning hostname "no-preload-525875"
	I0817 22:24:13.039109  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039238  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.041385  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041666  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.041698  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041838  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.042022  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042206  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042396  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.042612  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.043170  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.043189  255057 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-525875 && echo "no-preload-525875" | sudo tee /etc/hostname
	I0817 22:24:13.177388  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-525875
	
	I0817 22:24:13.177433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.180249  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180571  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.180599  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180808  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.181054  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181224  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181371  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.181544  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.181969  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.181994  255057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-525875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-525875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-525875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:13.307614  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:13.307675  255057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:13.307719  255057 buildroot.go:174] setting up certificates
	I0817 22:24:13.307731  255057 provision.go:83] configureAuth start
	I0817 22:24:13.307745  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.308044  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.311084  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311457  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.311491  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311665  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.313712  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314066  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.314101  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314252  255057 provision.go:138] copyHostCerts
	I0817 22:24:13.314354  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:13.314397  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:13.314495  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:13.314610  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:13.314623  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:13.314661  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:13.314735  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:13.314745  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:13.314779  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:13.314841  255057 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.no-preload-525875 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube no-preload-525875]
	I0817 22:24:13.395589  255057 provision.go:172] copyRemoteCerts
	I0817 22:24:13.395693  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:13.395724  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.398603  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.398936  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.398972  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.399154  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.399379  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.399566  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.399717  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.487194  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:13.510918  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:24:13.534013  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:13.556876  255057 provision.go:86] duration metric: configureAuth took 249.122979ms
	I0817 22:24:13.556910  255057 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:13.557143  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:13.557265  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.560140  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560483  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.560514  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560748  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.560965  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561143  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561274  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.561516  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.562128  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.562155  255057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:13.863145  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:13.863181  255057 machine.go:91] provisioned docker machine in 824.487372ms
	I0817 22:24:13.863206  255057 start.go:300] post-start starting for "no-preload-525875" (driver="kvm2")
	I0817 22:24:13.863219  255057 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:13.863247  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.863636  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:13.863681  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.866612  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.866950  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.867000  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.867115  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.867333  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.867524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.867695  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.957157  255057 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:13.961765  255057 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:13.961801  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:13.961919  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:13.962002  255057 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:13.962116  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:13.971105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:13.999336  255057 start.go:303] post-start completed in 136.111451ms
	I0817 22:24:13.999367  255057 fix.go:56] fixHost completed within 19.636437946s
	I0817 22:24:13.999391  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.002294  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002689  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.002717  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002995  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.003236  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003572  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.003744  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:14.004145  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:14.004160  255057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:14.122987  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311054.069328214
	
	I0817 22:24:14.123011  255057 fix.go:206] guest clock: 1692311054.069328214
	I0817 22:24:14.123019  255057 fix.go:219] Guest: 2023-08-17 22:24:14.069328214 +0000 UTC Remote: 2023-08-17 22:24:13.999370872 +0000 UTC m=+291.082280559 (delta=69.957342ms)
	I0817 22:24:14.123080  255057 fix.go:190] guest clock delta is within tolerance: 69.957342ms
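For context on the "guest clock" / "delta is within tolerance" lines just above: the host runs `date +%s.%N` on the guest, compares the result against its own clock, and only resyncs if the difference is too large. A minimal, illustrative Go sketch of that check follows; the one-second tolerance and the float parsing of the `date` output are assumptions for the example, not minikube's exact implementation.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // guestClock converts the output of `date +%s.%N` (seconds.nanoseconds)
    // into a time.Time, matching the log's "guest clock: 1692311054.069328214" form.
    func guestClock(out string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(out, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(secs)
    	nsec := int64((secs - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	remote := time.Now() // host-side timestamp taken when the SSH command returned
    	guest, err := guestClock("1692311054.069328214")
    	if err != nil {
    		panic(err)
    	}
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	// Assumed tolerance for the sketch; a larger delta would trigger a clock resync.
    	const tolerance = time.Second
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }

In the run above the measured delta was about 70ms, well under any reasonable tolerance, so no resync was needed.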
	I0817 22:24:14.123087  255057 start.go:83] releasing machines lock for "no-preload-525875", held for 19.760401588s
	I0817 22:24:14.123125  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.123445  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:14.126573  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.126925  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.126962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.127146  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127781  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127974  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.128071  255057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:14.128125  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.128226  255057 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:14.128258  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.131020  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131333  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131367  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131390  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.131715  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.131789  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131829  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131895  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.131975  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.132057  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.132156  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.132272  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.132425  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.219665  255057 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:14.247437  255057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:14.400674  255057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:14.408384  255057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:14.408502  255057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:14.423811  255057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:14.423860  255057 start.go:466] detecting cgroup driver to use...
	I0817 22:24:14.423953  255057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:14.436628  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:14.448671  255057 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:14.448765  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:14.461946  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:14.475294  255057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:14.581194  255057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:14.708045  255057 docker.go:212] disabling docker service ...
	I0817 22:24:14.708110  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:14.722033  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:14.733323  255057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:14.857587  255057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:14.980798  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:14.994728  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:15.012428  255057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:15.012505  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.021683  255057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:15.021763  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.031095  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.040825  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.050770  255057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:15.060644  255057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:15.068941  255057 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:15.069022  255057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:15.081634  255057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:15.090552  255057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:15.205174  255057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:15.383127  255057 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:15.383224  255057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:15.391893  255057 start.go:534] Will wait 60s for crictl version
	I0817 22:24:15.391983  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.398121  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:15.450273  255057 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:15.450368  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.506757  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.560170  255057 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
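The `sed -i` invocations in the preceding block rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.9 and switch the cgroup manager to cgroupfs before restarting crio. A rough Go equivalent of that line-oriented rewrite is sketched below; the file path and keys come from the log, while the error handling and file mode are illustrative.

    package main

    import (
    	"os"
    	"regexp"
    )

    // setCrioOption replaces any existing `key = ...` line in a CRI-O drop-in
    // config with `key = "value"`, mirroring the sed invocations in the log.
    func setCrioOption(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
    	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		panic(err)
    	}
    }

The log additionally deletes any conmon_cgroup line and re-adds `conmon_cgroup = "pod"` immediately after the cgroup_manager line; the same replace pattern extends to that case.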
	I0817 22:24:14.149845  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Start
	I0817 22:24:14.150032  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring networks are active...
	I0817 22:24:14.150803  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network default is active
	I0817 22:24:14.151110  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network mk-embed-certs-437183 is active
	I0817 22:24:14.151492  255215 main.go:141] libmachine: (embed-certs-437183) Getting domain xml...
	I0817 22:24:14.152247  255215 main.go:141] libmachine: (embed-certs-437183) Creating domain...
	I0817 22:24:15.472135  255215 main.go:141] libmachine: (embed-certs-437183) Waiting to get IP...
	I0817 22:24:15.473014  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.473413  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.473492  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.473421  256157 retry.go:31] will retry after 194.38634ms: waiting for machine to come up
	I0817 22:24:15.670047  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.670479  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.670528  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.670445  256157 retry.go:31] will retry after 332.988154ms: waiting for machine to come up
	I0817 22:24:16.005357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.005862  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.005898  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.005790  256157 retry.go:31] will retry after 376.364025ms: waiting for machine to come up
	I0817 22:24:16.384423  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.384866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.384916  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.384805  256157 retry.go:31] will retry after 392.048125ms: waiting for machine to come up
	I0817 22:24:16.778356  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.778744  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.778780  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.778683  256157 retry.go:31] will retry after 688.962088ms: waiting for machine to come up
	I0817 22:24:17.469767  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:17.470257  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:17.470287  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:17.470211  256157 retry.go:31] will retry after 660.617465ms: waiting for machine to come up
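The repeated `retry.go:31] will retry after ...: waiting for machine to come up` lines above are a poll loop waiting for libvirt to hand the embed-certs-437183 domain a DHCP lease. A minimal sketch of that pattern is below; `lookupIP` is a made-up placeholder for the DHCP-lease query, and the randomized, growing backoff is an assumption that only approximates the intervals seen in the log.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
    // for the domain's MAC address; it fails until the guest has an address.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls lookupIP with a randomized, growing delay, logging each
    // retry the way the "will retry after ..." lines in the log do.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff = backoff * 3 / 2 // grow the base delay, roughly like the log's intervals
    	}
    	return "", fmt.Errorf("machine %s did not come up within %v", mac, timeout)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:c7:c0:2b", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }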
	I0817 22:24:15.561695  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:15.564750  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565097  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:15.565127  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565409  255057 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:15.569673  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:15.584980  255057 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:24:15.585030  255057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:15.617365  255057 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:24:15.617396  255057 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.0-rc.1 registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 registry.k8s.io/kube-scheduler:v1.28.0-rc.1 registry.k8s.io/kube-proxy:v1.28.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:24:15.617470  255057 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.617497  255057 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.617529  255057 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.617606  255057 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.617541  255057 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.617637  255057 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0817 22:24:15.617507  255057 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.617985  255057 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619154  255057 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0817 22:24:15.619338  255057 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619355  255057 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.619350  255057 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.619369  255057 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.619335  255057 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.619381  255057 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.619414  255057 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.793551  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.793935  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.796339  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.797436  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.806385  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.813161  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0817 22:24:15.840200  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.935464  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.940863  255057 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0817 22:24:15.940940  255057 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.940881  255057 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" does not exist at hash "046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd" in container runtime
	I0817 22:24:15.941028  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.941031  255057 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.941115  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952609  255057 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" does not exist at hash "e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef" in container runtime
	I0817 22:24:15.952687  255057 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.952709  255057 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0817 22:24:15.952741  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952751  255057 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.952790  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.007640  255057 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" does not exist at hash "2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d" in container runtime
	I0817 22:24:16.007686  255057 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.007740  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099763  255057 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.0-rc.1" does not exist at hash "cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8" in container runtime
	I0817 22:24:16.099817  255057 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.099873  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099909  255057 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0817 22:24:16.099969  255057 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.099980  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:16.100019  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.100052  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:16.100127  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:16.100145  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:16.100198  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.105175  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.197301  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0817 22:24:16.197377  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197418  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197432  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197437  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.197476  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.197421  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:16.197520  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197535  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.214043  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0817 22:24:16.214189  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:16.225659  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1 (exists)
	I0817 22:24:16.225690  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225750  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225882  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.225973  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.229070  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1 (exists)
	I0817 22:24:16.229235  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1 (exists)
	I0817 22:24:16.258828  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0817 22:24:16.258905  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 22:24:16.258990  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0817 22:24:16.259013  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
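The `copy: skipping ... (exists)` lines in this block come from comparing each cached tarball against what is already under /var/lib/minikube/images (via `stat -c "%s %y"` over SSH) before re-copying it. Below is an illustrative, local-only sketch of a size/mtime comparison of that kind; whether minikube's rule is exactly this is not shown in the log, so treat it as an approximation.

    package main

    import (
    	"fmt"
    	"os"
    )

    // needsCopy reports whether dst is missing or differs from src in size or
    // modification time - the condition under which a cached image tarball
    // would be re-copied instead of logged as "copy: skipping ... (exists)".
    func needsCopy(src, dst string) (bool, error) {
    	s, err := os.Stat(src)
    	if err != nil {
    		return false, err
    	}
    	d, err := os.Stat(dst)
    	if os.IsNotExist(err) {
    		return true, nil
    	}
    	if err != nil {
    		return false, err
    	}
    	return s.Size() != d.Size() || !s.ModTime().Equal(d.ModTime()), nil
    }

    func main() {
    	copyNeeded, err := needsCopy(
    		"/home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0",
    		"/var/lib/minikube/images/etcd_3.5.9-0",
    	)
    	if err != nil {
    		fmt.Println("stat failed:", err)
    		return
    	}
    	fmt.Println("copy needed:", copyNeeded)
    }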
	I0817 22:24:18.132851  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:18.133243  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:18.133310  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:18.133225  256157 retry.go:31] will retry after 900.178694ms: waiting for machine to come up
	I0817 22:24:19.035179  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:19.035579  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:19.035615  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:19.035514  256157 retry.go:31] will retry after 1.198702878s: waiting for machine to come up
	I0817 22:24:20.236711  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:20.237240  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:20.237273  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:20.237201  256157 retry.go:31] will retry after 1.809846012s: waiting for machine to come up
	I0817 22:24:22.048866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:22.049357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:22.049392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:22.049300  256157 retry.go:31] will retry after 1.671738979s: waiting for machine to come up
	I0817 22:24:18.395405  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1: (2.169611406s)
	I0817 22:24:18.395443  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 from cache
	I0817 22:24:18.395478  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (2.169478272s)
	I0817 22:24:18.395493  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.136469625s)
	I0817 22:24:18.395493  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:18.395509  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0817 22:24:18.395512  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1 (exists)
	I0817 22:24:18.395560  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:20.871009  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1: (2.475415377s)
	I0817 22:24:20.871043  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 from cache
	I0817 22:24:20.871073  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:20.871129  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:23.722312  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:23.722829  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:23.722864  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:23.722757  256157 retry.go:31] will retry after 1.856182792s: waiting for machine to come up
	I0817 22:24:25.580432  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:25.580936  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:25.580969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:25.580873  256157 retry.go:31] will retry after 2.404448523s: waiting for machine to come up
	I0817 22:24:23.529377  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1: (2.658213494s)
	I0817 22:24:23.529418  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 from cache
	I0817 22:24:23.529456  255057 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:23.529532  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:24.907071  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.377507339s)
	I0817 22:24:24.907105  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0817 22:24:24.907135  255057 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:24.907203  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
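Each `Run: sudo podman load -i ...` / `Completed: ... (N s)` pair in this block is one cached image tarball being streamed into the container runtime's store, with ssh_runner reporting the elapsed time. A small sketch of that loop, run locally rather than over SSH, is shown below; the tarball names are the ones the log loads, and the timing output only imitates ssh_runner's format.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Tarballs staged under /var/lib/minikube/images, in the order the log loads them.
    	images := []string{
    		"kube-scheduler_v1.28.0-rc.1",
    		"kube-controller-manager_v1.28.0-rc.1",
    		"kube-apiserver_v1.28.0-rc.1",
    		"coredns_v1.10.1",
    		"etcd_3.5.9-0",
    		"storage-provisioner_v5",
    		"kube-proxy_v1.28.0-rc.1",
    	}
    	for _, img := range images {
    		path := "/var/lib/minikube/images/" + img
    		start := time.Now()
    		// podman load reads an image archive and stores it for the container runtime.
    		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
    		if err != nil {
    			fmt.Printf("load %s failed: %v\n%s", img, err, out)
    			continue
    		}
    		fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", path, time.Since(start))
    	}
    }

The loads run one at a time (crio.go logs "Loading image:" for each), which is why the etcd tarball alone accounts for ~6.5s of the sequence further down.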
	I0817 22:24:27.988784  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:27.989226  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:27.989252  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:27.989214  256157 retry.go:31] will retry after 4.145677854s: waiting for machine to come up
	I0817 22:24:32.139031  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139722  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has current primary IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139755  255215 main.go:141] libmachine: (embed-certs-437183) Found IP for machine: 192.168.39.186
	I0817 22:24:32.139768  255215 main.go:141] libmachine: (embed-certs-437183) Reserving static IP address...
	I0817 22:24:32.140361  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.140408  255215 main.go:141] libmachine: (embed-certs-437183) Reserved static IP address: 192.168.39.186
	I0817 22:24:32.140428  255215 main.go:141] libmachine: (embed-certs-437183) DBG | skip adding static IP to network mk-embed-certs-437183 - found existing host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"}
	I0817 22:24:32.140450  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Getting to WaitForSSH function...
	I0817 22:24:32.140465  255215 main.go:141] libmachine: (embed-certs-437183) Waiting for SSH to be available...
	I0817 22:24:32.142752  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143141  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.143192  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143343  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH client type: external
	I0817 22:24:32.143392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa (-rw-------)
	I0817 22:24:32.143431  255215 main.go:141] libmachine: (embed-certs-437183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:32.143459  255215 main.go:141] libmachine: (embed-certs-437183) DBG | About to run SSH command:
	I0817 22:24:32.143475  255215 main.go:141] libmachine: (embed-certs-437183) DBG | exit 0
	I0817 22:24:32.246211  255215 main.go:141] libmachine: (embed-certs-437183) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:32.246582  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetConfigRaw
	I0817 22:24:32.247284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.249789  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250204  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.250237  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250567  255215 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/config.json ...
	I0817 22:24:32.250808  255215 machine.go:88] provisioning docker machine ...
	I0817 22:24:32.250831  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:32.251049  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251209  255215 buildroot.go:166] provisioning hostname "embed-certs-437183"
	I0817 22:24:32.251230  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251344  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.253729  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254094  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.254124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254276  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.254434  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254654  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254817  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.254981  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.255466  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.255481  255215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-437183 && echo "embed-certs-437183" | sudo tee /etc/hostname
	I0817 22:24:32.412247  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437183
	
	I0817 22:24:32.412284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.415194  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415508  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.415561  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415666  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.415910  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416113  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416297  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.416501  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.417004  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.417024  255215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-437183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-437183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-437183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:32.559200  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:32.559253  255215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:32.559282  255215 buildroot.go:174] setting up certificates
	I0817 22:24:32.559299  255215 provision.go:83] configureAuth start
	I0817 22:24:32.559313  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.559696  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.562469  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.562960  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.562989  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.563141  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.565760  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566120  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.566178  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566344  255215 provision.go:138] copyHostCerts
	I0817 22:24:32.566427  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:32.566443  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:32.566504  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:32.566633  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:32.566642  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:32.566676  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:32.566730  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:32.566738  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:32.566755  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:32.566803  255215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-437183 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube embed-certs-437183]
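The `provision.go:112] generating server cert ... san=[...]` line above lists the subject alternative names minikube bakes into the machine's server certificate. Below is a self-contained sketch of issuing such a cert with Go's crypto/x509, using the SAN list from the log; the throwaway in-memory CA stands in for the ca.pem/ca-key.pem pair the log references, and the key sizes and validity period are assumptions.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Throwaway in-memory CA standing in for ca.pem / ca-key.pem from the log.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Server certificate carrying the SANs listed in the provision.go log line.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-437183"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "embed-certs-437183"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.186"), net.ParseIP("127.0.0.1")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }

The resulting server.pem is what gets copied to /etc/docker/server.pem a few lines further on.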
	I0817 22:24:31.437386  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.530148826s)
	I0817 22:24:31.437453  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0817 22:24:31.437478  255057 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:31.437578  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:32.398228  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0817 22:24:32.398294  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:32.398359  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:33.487487  255491 start.go:369] acquired machines lock for "default-k8s-diff-port-321287" in 4m16.661569765s
	I0817 22:24:33.487552  255491 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:33.487569  255491 fix.go:54] fixHost starting: 
	I0817 22:24:33.488059  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:33.488104  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:33.506430  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0817 22:24:33.506958  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:33.507587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:24:33.507618  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:33.508078  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:33.508296  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:33.508471  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:24:33.510492  255491 fix.go:102] recreateIfNeeded on default-k8s-diff-port-321287: state=Stopped err=<nil>
	I0817 22:24:33.510539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	W0817 22:24:33.510738  255491 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:33.512965  255491 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-321287" ...
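The fixHost sequence above checks the existing machine's state and, finding it Stopped, restarts the kvm2 VM in place rather than recreating it. Below is a hedged sketch of that decision with a made-up driver interface standing in for the real libmachine plugin; the state names match the log, everything else is illustrative.

    package main

    import "fmt"

    // driver is a hypothetical cut-down view of the libmachine driver used here.
    type driver interface {
    	GetState() (string, error)
    	Start() error
    	Create() error
    }

    // fixHost mirrors the log's recreateIfNeeded decision: a Stopped machine is
    // restarted in place, a Running one is reused, anything else is (re)created.
    func fixHost(name string, d driver) error {
    	state, err := d.GetState()
    	if err != nil {
    		return fmt.Errorf("getting state of %q: %w", name, err)
    	}
    	switch state {
    	case "Running":
    		return nil // reuse the existing machine as-is
    	case "Stopped":
    		fmt.Printf("* Restarting existing kvm2 VM for %q ...\n", name)
    		return d.Start()
    	default:
    		return d.Create()
    	}
    }

    type fakeDriver struct{ state string }

    func (f *fakeDriver) GetState() (string, error) { return f.state, nil }
    func (f *fakeDriver) Start() error              { f.state = "Running"; return nil }
    func (f *fakeDriver) Create() error             { f.state = "Running"; return nil }

    func main() {
    	d := &fakeDriver{state: "Stopped"}
    	if err := fixHost("default-k8s-diff-port-321287", d); err != nil {
    		fmt.Println(err)
    	}
    	fmt.Println("state now:", d.state)
    }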
	I0817 22:24:32.687763  255215 provision.go:172] copyRemoteCerts
	I0817 22:24:32.687835  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:32.687864  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.690614  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.690921  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.690963  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.691253  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.691469  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.691631  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.691745  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:32.788388  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:32.811861  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:32.835407  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0817 22:24:32.858542  255215 provision.go:86] duration metric: configureAuth took 299.225654ms
	I0817 22:24:32.858581  255215 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:32.858850  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:32.858989  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.861726  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862140  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.862186  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862436  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.862717  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.862961  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.863135  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.863321  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.863744  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.863762  255215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:33.202904  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:33.202942  255215 machine.go:91] provisioned docker machine in 952.11856ms
	I0817 22:24:33.202986  255215 start.go:300] post-start starting for "embed-certs-437183" (driver="kvm2")
	I0817 22:24:33.203002  255215 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:33.203039  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.203427  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:33.203465  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.206544  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.206969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.207004  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.207154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.207407  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.207591  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.207747  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.304648  255215 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:33.309404  255215 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:33.309435  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:33.309536  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:33.309635  255215 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:33.309752  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:33.318682  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:33.343830  255215 start.go:303] post-start completed in 140.8201ms
	I0817 22:24:33.343870  255215 fix.go:56] fixHost completed within 19.220571855s
	I0817 22:24:33.343901  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.347196  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347625  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.347658  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347927  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.348154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348336  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348487  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.348741  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:33.349346  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:33.349361  255215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 22:24:33.487290  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311073.433845199
	
	I0817 22:24:33.487319  255215 fix.go:206] guest clock: 1692311073.433845199
	I0817 22:24:33.487331  255215 fix.go:219] Guest: 2023-08-17 22:24:33.433845199 +0000 UTC Remote: 2023-08-17 22:24:33.343875474 +0000 UTC m=+290.714391364 (delta=89.969725ms)
	I0817 22:24:33.487370  255215 fix.go:190] guest clock delta is within tolerance: 89.969725ms
	I0817 22:24:33.487378  255215 start.go:83] releasing machines lock for "embed-certs-437183", held for 19.364124776s
	I0817 22:24:33.487412  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.487714  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:33.490444  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.490945  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.490975  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.491191  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492024  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492278  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492378  255215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:33.492440  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.492569  255215 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:33.492600  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.495461  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495742  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495836  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.495879  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.496130  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496147  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496287  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496341  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496445  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496604  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496605  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496792  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.496886  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.634234  255215 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:33.642529  255215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:33.802107  255215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:33.808439  255215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:33.808520  255215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:33.823947  255215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:33.823975  255215 start.go:466] detecting cgroup driver to use...
	I0817 22:24:33.824058  255215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:33.839665  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:33.854435  255215 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:33.854512  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:33.871530  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:33.886466  255215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:34.017312  255215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:34.152720  255215 docker.go:212] disabling docker service ...
	I0817 22:24:34.152811  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:34.170506  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:34.186072  255215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:34.327678  255215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:34.450774  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:34.468330  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:34.491610  255215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:34.491684  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.506266  255215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:34.506360  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.517471  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.531351  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.542363  255215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:34.553383  255215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:34.562937  255215 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:34.563029  255215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:34.575978  255215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:34.588500  255215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:34.715821  255215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:34.912771  255215 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:34.912853  255215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:34.918377  255215 start.go:534] Will wait 60s for crictl version
	I0817 22:24:34.918445  255215 ssh_runner.go:195] Run: which crictl
	I0817 22:24:34.922462  255215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:34.962654  255215 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:34.962754  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.020574  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.078516  255215 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:33.514448  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Start
	I0817 22:24:33.514667  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring networks are active...
	I0817 22:24:33.515504  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network default is active
	I0817 22:24:33.515973  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network mk-default-k8s-diff-port-321287 is active
	I0817 22:24:33.516607  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Getting domain xml...
	I0817 22:24:33.517407  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Creating domain...
	I0817 22:24:35.032992  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting to get IP...
	I0817 22:24:35.034213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034833  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.034747  256286 retry.go:31] will retry after 255.561446ms: waiting for machine to come up
	I0817 22:24:35.292497  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293071  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293110  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.293035  256286 retry.go:31] will retry after 265.433217ms: waiting for machine to come up
	I0817 22:24:35.560591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561221  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.561138  256286 retry.go:31] will retry after 429.726379ms: waiting for machine to come up
	I0817 22:24:35.993046  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993573  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.993482  256286 retry.go:31] will retry after 583.273043ms: waiting for machine to come up
	I0817 22:24:36.578452  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578943  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578983  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:36.578889  256286 retry.go:31] will retry after 504.577651ms: waiting for machine to come up
	I0817 22:24:35.080561  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:35.083955  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084338  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:35.084376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084624  255215 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:35.088994  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:35.104758  255215 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:35.104814  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:35.140529  255215 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:35.140606  255215 ssh_runner.go:195] Run: which lz4
	I0817 22:24:35.144869  255215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:24:35.149131  255215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:35.149168  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:24:37.067793  255215 crio.go:444] Took 1.922962 seconds to copy over tarball
	I0817 22:24:37.067867  255215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:24:34.276465  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (1.878070898s)
	I0817 22:24:34.276495  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 from cache
	I0817 22:24:34.276528  255057 cache_images.go:123] Successfully loaded all cached images
	I0817 22:24:34.276535  255057 cache_images.go:92] LoadImages completed in 18.659123421s
	I0817 22:24:34.276651  255057 ssh_runner.go:195] Run: crio config
	I0817 22:24:34.349440  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:34.349470  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:34.349525  255057 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:34.349559  255057 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-525875 NodeName:no-preload-525875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:34.349737  255057 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-525875"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:34.349852  255057 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-525875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:34.349927  255057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:24:34.361082  255057 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:34.361211  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:34.370571  255057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0817 22:24:34.390596  255057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:24:34.409602  255057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0817 22:24:34.431076  255057 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:34.435869  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:34.448753  255057 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875 for IP: 192.168.61.196
	I0817 22:24:34.448854  255057 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:34.449077  255057 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:34.449125  255057 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:34.449229  255057 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/client.key
	I0817 22:24:34.449287  255057 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key.0d67e2f2
	I0817 22:24:34.449320  255057 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key
	I0817 22:24:34.449438  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:34.449466  255057 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:34.449476  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:34.449499  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:34.449523  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:34.449545  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:34.449586  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:34.450600  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:34.481454  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:24:34.514638  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:34.539306  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:24:34.565390  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:34.595648  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:34.628105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:34.654925  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:34.684138  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:34.709433  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:34.736933  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:34.772217  255057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:34.790940  255057 ssh_runner.go:195] Run: openssl version
	I0817 22:24:34.800419  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:34.811545  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819623  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819697  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.825793  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:34.836531  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:34.847239  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852331  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852394  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.861659  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:34.871817  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:34.883257  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889654  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889728  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.897773  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:34.909259  255057 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:34.914775  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:34.921549  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:34.928370  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:34.934849  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:34.941470  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:34.949932  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:24:34.956863  255057 kubeadm.go:404] StartCluster: {Name:no-preload-525875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525
875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:34.957036  255057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:34.957123  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:35.005195  255057 cri.go:89] found id: ""
	I0817 22:24:35.005282  255057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:35.015727  255057 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:35.015754  255057 kubeadm.go:636] restartCluster start
	I0817 22:24:35.015821  255057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:35.025333  255057 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.026796  255057 kubeconfig.go:92] found "no-preload-525875" server: "https://192.168.61.196:8443"
	I0817 22:24:35.030361  255057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:35.040698  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.040754  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.055650  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.055675  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.055719  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.066812  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.567215  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.567291  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.580471  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.066958  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.067035  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.081758  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.567234  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.567320  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.582474  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.066970  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.067060  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.079066  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.567780  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.567887  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.583652  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.085672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086184  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086222  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.086130  256286 retry.go:31] will retry after 660.028004ms: waiting for machine to come up
	I0817 22:24:37.747563  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748056  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748086  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.748020  256286 retry.go:31] will retry after 798.952498ms: waiting for machine to come up
	I0817 22:24:38.548762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549243  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549276  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:38.549193  256286 retry.go:31] will retry after 1.15249289s: waiting for machine to come up
	I0817 22:24:39.703164  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703739  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703773  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:39.703675  256286 retry.go:31] will retry after 1.300284471s: waiting for machine to come up
	I0817 22:24:41.006289  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006781  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006814  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:41.006717  256286 retry.go:31] will retry after 1.500753962s: waiting for machine to come up
	I0817 22:24:40.155737  255215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087825588s)
	I0817 22:24:40.155771  255215 crio.go:451] Took 3.087946 seconds to extract the tarball
	I0817 22:24:40.155784  255215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:24:40.196940  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:40.238837  255215 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:24:40.238863  255215 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:24:40.238934  255215 ssh_runner.go:195] Run: crio config
	I0817 22:24:40.302526  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:24:40.302552  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:40.302572  255215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:40.302593  255215 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-437183 NodeName:embed-certs-437183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:40.302793  255215 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-437183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:40.302860  255215 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-437183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:40.302914  255215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:24:40.312428  255215 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:40.312517  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:40.321824  255215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0817 22:24:40.340069  255215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:24:40.358609  255215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0817 22:24:40.376546  255215 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:40.380576  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:40.394264  255215 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183 for IP: 192.168.39.186
	I0817 22:24:40.394310  255215 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:40.394509  255215 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:40.394569  255215 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:40.394678  255215 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/client.key
	I0817 22:24:40.394749  255215 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key.d0691019
	I0817 22:24:40.394810  255215 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key
	I0817 22:24:40.394956  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:40.394999  255215 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:40.395013  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:40.395056  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:40.395096  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:40.395127  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:40.395197  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:40.396122  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:40.421809  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:24:40.447412  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:40.472678  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:24:40.501303  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:40.528016  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:40.553741  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:40.581792  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:40.609270  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:40.634901  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:40.659698  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:40.685767  255215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:40.704114  255215 ssh_runner.go:195] Run: openssl version
	I0817 22:24:40.709921  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:40.720035  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725167  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725232  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.731054  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:40.741277  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:40.751649  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757538  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757621  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.763574  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:40.773786  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:40.784152  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790448  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790529  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.796689  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
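The block above shows how each CA certificate is activated on the guest: the PEM is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed with openssl x509 -hash -noout, and /etc/ssl/certs/<hash>.0 is symlinked to it so the system trust store picks it up. A minimal Go sketch of that hash-and-symlink step, assuming the openssl binary is available; the certificate path is illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA recreates the hash-and-symlink step seen in the log:
    //   openssl x509 -hash -noout -in <cert>   -> e.g. "b5213941"
    //   ln -fs <cert> /etc/ssl/certs/<hash>.0
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // "-f" semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }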
	I0817 22:24:40.806968  255215 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:40.811858  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:40.818172  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:40.824439  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:40.830588  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:40.836734  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:40.842857  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
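The six openssl x509 -checkend 86400 runs above verify that none of the control-plane client and serving certificates expire within the next 24 hours; the command exits non-zero if one does. A native-Go equivalent of that check is sketched below (minikube itself shells out to openssl as the log shows, and the file path here is illustrative).

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // within d, mirroring `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }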
	I0817 22:24:40.849072  255215 kubeadm.go:404] StartCluster: {Name:embed-certs-437183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:40.849208  255215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:40.849269  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:40.882040  255215 cri.go:89] found id: ""
	I0817 22:24:40.882132  255215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:40.893833  255215 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:40.893859  255215 kubeadm.go:636] restartCluster start
	I0817 22:24:40.893926  255215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:40.906498  255215 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.907768  255215 kubeconfig.go:92] found "embed-certs-437183" server: "https://192.168.39.186:8443"
	I0817 22:24:40.910282  255215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:40.921945  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.922021  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.933335  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.933360  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.933417  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.944168  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.444996  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.445109  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.457502  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.944752  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.944881  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.960929  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.444350  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.444464  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.461555  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
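The repeated "Checking apiserver status" entries are a poll: the restart path keeps running sudo pgrep -xnf kube-apiserver.*minikube.* until a kube-apiserver process appears or the wait gives up, at which point the cluster is reconfigured instead. A minimal sketch of that retry loop; the pgrep pattern is taken from the log, while the interval and timeout are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until kube-apiserver shows up or the deadline passes.
    // pgrep exits with status 1 while nothing matches, exactly as in the log above.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerPID(2 * time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver pid:", pid)
    }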
	I0817 22:24:38.066927  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.067043  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.082831  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.567259  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.567347  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.581544  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.067112  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.067211  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.078859  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.566916  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.567075  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.582637  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.067188  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.067286  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.082771  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.567236  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.567331  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.583192  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.067806  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.067953  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.082962  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.567559  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.567664  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.582761  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.067267  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.067357  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.078631  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.567181  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.567299  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.583270  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.509044  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509662  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509688  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:42.509599  256286 retry.go:31] will retry after 2.726859315s: waiting for machine to come up
	I0817 22:24:45.239162  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239727  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239756  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:45.239667  256286 retry.go:31] will retry after 2.868820101s: waiting for machine to come up
	I0817 22:24:42.944983  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.945083  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.960949  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.444415  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.444541  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.460157  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.944659  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.944757  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.960506  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.444408  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.444544  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.460666  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.944252  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.944358  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.956137  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.444667  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.444779  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.460524  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.944710  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.945003  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.961038  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.444556  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.444684  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.459345  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.944760  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.944858  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.961217  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:47.444786  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.444935  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.460748  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.067683  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.067794  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.083038  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.567750  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.567850  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.579427  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.066928  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.067014  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.078671  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.567463  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.567559  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.579377  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.041151  255057 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:45.041202  255057 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:45.041218  255057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:45.041279  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:45.080480  255057 cri.go:89] found id: ""
	I0817 22:24:45.080569  255057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:45.096518  255057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:45.107778  255057 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:45.107880  255057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117115  255057 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117151  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.269517  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.790366  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.988106  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.124121  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
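Rather than a full kubeadm init, the reconfigure path replays individual kubeadm init phase subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of driving those phases in order; the binary and config paths are the ones in the log, and error handling is simplified.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.28.0-rc.1/kubeadm" // binary path as it appears in the log
        config := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", config)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm init phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }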
	I0817 22:24:46.219342  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:46.219438  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.241849  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.795050  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.295314  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.795361  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.111566  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112173  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:48.112079  256286 retry.go:31] will retry after 3.129130141s: waiting for machine to come up
	I0817 22:24:51.245244  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245759  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245788  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:51.245707  256286 retry.go:31] will retry after 4.573749963s: waiting for machine to come up
	I0817 22:24:47.944303  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.944406  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.960613  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.445144  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.445245  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.460221  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.944726  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.944811  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.958575  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.444744  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.444875  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.460348  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.944986  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.945117  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.958396  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.445013  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:50.445110  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:50.459941  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.922423  255215 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:50.922493  255215 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:50.922513  255215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:50.922581  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:50.964064  255215 cri.go:89] found id: ""
	I0817 22:24:50.964154  255215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:50.980513  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:50.990086  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:50.990152  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999907  255215 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999935  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:51.147593  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.150655  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.002996323s)
	I0817 22:24:52.150694  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.367611  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.461186  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.534447  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:52.534547  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:52.551513  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.295087  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.794596  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.817042  255057 api_server.go:72] duration metric: took 2.597699698s to wait for apiserver process to appear ...
	I0817 22:24:48.817069  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:48.817086  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.817615  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:48.817653  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.818012  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:49.318894  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.160567  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.160612  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.160627  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.246065  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.246117  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.318300  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.394871  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.394932  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:52.818493  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.825349  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.825391  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.318277  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.324705  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:53.324751  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.818240  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.823823  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:24:53.834528  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:24:53.834573  255057 api_server.go:131] duration metric: took 5.01749639s to wait for apiserver health ...
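The healthz wait above tolerates the transient 403 (anonymous access before RBAC bootstrap) and 500 (poststarthooks still failing) responses and keeps polling until /healthz returns 200 with "ok". A minimal sketch of such a probe, assuming anonymous HTTPS access and skipping certificate verification because the apiserver's serving cert is not yet trusted locally; the URL is the one in the log and the timeout is illustrative.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver's serving certificate is not in the local trust store yet.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                io.Copy(io.Discard, resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // /healthz answered 200 "ok"
                }
                // 403 before RBAC bootstrap and 500 while poststarthooks finish are expected; retry.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.196:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }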
	I0817 22:24:53.834586  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:53.834596  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:53.836827  255057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:53.838602  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:24:53.850880  255057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:24:53.871556  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:24:53.886793  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:24:53.886858  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:24:53.886875  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:24:53.886889  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:24:53.886902  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:24:53.886922  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:24:53.886939  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:24:53.886948  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:24:53.886961  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:24:53.886975  255057 system_pods.go:74] duration metric: took 15.392207ms to wait for pod list to return data ...
	I0817 22:24:53.886988  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:24:53.891527  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:24:53.891589  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:24:53.891630  255057 node_conditions.go:105] duration metric: took 4.635197ms to run NodePressure ...
	I0817 22:24:53.891656  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:54.230065  255057 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239113  255057 kubeadm.go:787] kubelet initialised
	I0817 22:24:54.239146  255057 kubeadm.go:788] duration metric: took 9.048225ms waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239159  255057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:54.251454  255057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.266584  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266619  255057 pod_ready.go:81] duration metric: took 15.127554ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.266633  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266645  255057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.278901  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278932  255057 pod_ready.go:81] duration metric: took 12.266962ms waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.278944  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278952  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.297982  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298020  255057 pod_ready.go:81] duration metric: took 19.058778ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.298032  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298047  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.309929  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309967  255057 pod_ready.go:81] duration metric: took 11.898508ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.309980  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309991  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.676448  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676495  255057 pod_ready.go:81] duration metric: took 366.48994ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.676507  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676547  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.078351  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078392  255057 pod_ready.go:81] duration metric: took 401.831269ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.078405  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078416  255057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.476059  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476101  255057 pod_ready.go:81] duration metric: took 397.677369ms waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.476111  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476121  255057 pod_ready.go:38] duration metric: took 1.236947103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
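The pod_ready loop above skips each system pod while the node still reports Ready=False, then keeps waiting (up to 4m0s per pod) for the pods themselves to become Ready. A rough outside-the-harness equivalent using kubectl wait is sketched below; the label selector and timeouts are illustrative, not what the test binary actually uses.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Wait for the node first, then for the kube-system control-plane pods.
        cmds := [][]string{
            {"kubectl", "--context", "no-preload-525875", "wait", "--for=condition=Ready",
                "node", "--all", "--timeout=4m"},
            {"kubectl", "--context", "no-preload-525875", "-n", "kube-system", "wait",
                "--for=condition=Ready", "pod", "-l", "tier=control-plane", "--timeout=4m"},
        }
        for _, c := range cmds {
            cmd := exec.Command(c[0], c[1:]...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "%v failed: %v\n", c, err)
                os.Exit(1)
            }
        }
    }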
	I0817 22:24:55.476143  255057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:24:55.487413  255057 ops.go:34] apiserver oom_adj: -16
	I0817 22:24:55.487448  255057 kubeadm.go:640] restartCluster took 20.471686915s
	I0817 22:24:55.487459  255057 kubeadm.go:406] StartCluster complete in 20.530629906s
	I0817 22:24:55.487482  255057 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.487591  255057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:24:55.489799  255057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.490091  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:24:55.490202  255057 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:24:55.490349  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:55.490375  255057 addons.go:69] Setting storage-provisioner=true in profile "no-preload-525875"
	I0817 22:24:55.490380  255057 addons.go:69] Setting metrics-server=true in profile "no-preload-525875"
	I0817 22:24:55.490397  255057 addons.go:231] Setting addon storage-provisioner=true in "no-preload-525875"
	I0817 22:24:55.490404  255057 addons.go:231] Setting addon metrics-server=true in "no-preload-525875"
	W0817 22:24:55.490409  255057 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:24:55.490435  255057 addons.go:69] Setting default-storageclass=true in profile "no-preload-525875"
	I0817 22:24:55.490465  255057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-525875"
	I0817 22:24:55.490474  255057 host.go:66] Checking if "no-preload-525875" exists ...
	W0817 22:24:55.490413  255057 addons.go:240] addon metrics-server should already be in state true
	I0817 22:24:55.490547  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.491607  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.491742  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492181  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492232  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492255  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492291  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.503335  255057 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-525875" context rescaled to 1 replicas
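The kapi.go line records minikube scaling the coredns deployment down to one replica, the usual setting for a single-node cluster. The same operation done by hand is sketched below; only the deployment name, namespace, and replica count come from the log.

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // One coredns replica is enough on a single-node cluster.
        cmd := exec.Command("kubectl", "--context", "no-preload-525875",
            "-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }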
	I0817 22:24:55.503399  255057 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:24:55.505836  255057 out.go:177] * Verifying Kubernetes components...
	I0817 22:24:55.507438  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:24:55.512841  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0817 22:24:55.513126  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0817 22:24:55.513241  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0817 22:24:55.513441  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513567  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513770  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.514042  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514082  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514128  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514159  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514577  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514595  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514708  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514733  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514804  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.515081  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.515186  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515223  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.515651  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515699  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.532135  255057 addons.go:231] Setting addon default-storageclass=true in "no-preload-525875"
	W0817 22:24:55.532171  255057 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:24:55.532205  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.532614  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.532665  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.535464  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0817 22:24:55.537205  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:24:55.537544  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.537676  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.538005  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538022  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538197  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538209  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538328  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538574  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538694  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.538757  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.540907  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.541221  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.543481  255057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:55.545233  255057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:24:55.820955  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.821534  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Found IP for machine: 192.168.50.30
	I0817 22:24:55.821557  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserving static IP address...
	I0817 22:24:55.821590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has current primary IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.822134  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.822169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | skip adding static IP to network mk-default-k8s-diff-port-321287 - found existing host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"}
	I0817 22:24:55.822189  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Getting to WaitForSSH function...
	I0817 22:24:55.822212  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserved static IP address: 192.168.50.30
	I0817 22:24:55.822225  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for SSH to be available...
	I0817 22:24:55.825198  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.825630  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825769  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH client type: external
	I0817 22:24:55.825802  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa (-rw-------)
	I0817 22:24:55.825837  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:55.825855  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | About to run SSH command:
	I0817 22:24:55.825874  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | exit 0
	I0817 22:24:55.923224  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:55.923669  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetConfigRaw
	I0817 22:24:55.924434  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:55.927453  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.927935  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.927987  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.928304  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:24:55.928581  255491 machine.go:88] provisioning docker machine ...
	I0817 22:24:55.928610  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:55.928818  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.928963  255491 buildroot.go:166] provisioning hostname "default-k8s-diff-port-321287"
	I0817 22:24:55.928984  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.929169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:55.931672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932179  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.932213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:55.932606  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.932862  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.933008  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:55.933228  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:55.933895  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:55.933917  255491 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-321287 && echo "default-k8s-diff-port-321287" | sudo tee /etc/hostname
	I0817 22:24:56.066560  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-321287
	
	I0817 22:24:56.066599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.070072  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070509  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.070590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070901  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.071175  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071377  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071589  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.071813  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.072479  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.072511  255491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-321287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-321287/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-321287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:56.210857  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
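For reference (not part of the captured output): the shell fragment above is the guard that makes the new hostname resolve locally; once it has run, /etc/hosts on the guest should contain a line equivalent to "127.0.1.1 default-k8s-diff-port-321287". A quick spot check over the same SSH session would be:

    grep 'default-k8s-diff-port-321287' /etc/hosts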
	I0817 22:24:56.210897  255491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:56.210954  255491 buildroot.go:174] setting up certificates
	I0817 22:24:56.210968  255491 provision.go:83] configureAuth start
	I0817 22:24:56.210981  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:56.211435  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:56.214305  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214711  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.214762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214931  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.217766  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218200  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.218245  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218444  255491 provision.go:138] copyHostCerts
	I0817 22:24:56.218519  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:56.218533  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:56.218609  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:56.218728  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:56.218738  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:56.218769  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:56.218846  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:56.218856  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:56.218886  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:56.218953  255491 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-321287 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube default-k8s-diff-port-321287]
	I0817 22:24:56.289985  255491 provision.go:172] copyRemoteCerts
	I0817 22:24:56.290068  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:56.290104  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.293536  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.293996  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.294027  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.294218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.294456  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.294675  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.294866  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.386746  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:56.413448  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 22:24:56.438758  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
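For reference (not part of the captured output): the three scp lines above place the machine certificates at the remote paths shown, so a manual spot check on the guest would be:

    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem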
	I0817 22:24:56.467489  255491 provision.go:86] duration metric: configureAuth took 256.504259ms
	I0817 22:24:56.467525  255491 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:56.467792  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:56.467917  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.470870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.471373  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471601  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.471839  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472048  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.472441  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.473139  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.473162  255491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:57.100503  254975 start.go:369] acquired machines lock for "old-k8s-version-294781" in 57.735745135s
	I0817 22:24:57.100571  254975 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:57.100583  254975 fix.go:54] fixHost starting: 
	I0817 22:24:57.101120  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:57.101172  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:57.121393  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0817 22:24:57.122017  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:57.122807  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:24:57.122834  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:57.123289  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:57.123463  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:24:57.123584  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:24:57.125545  254975 fix.go:102] recreateIfNeeded on old-k8s-version-294781: state=Stopped err=<nil>
	I0817 22:24:57.125580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	W0817 22:24:57.125759  254975 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:57.127853  254975 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-294781" ...
	I0817 22:24:55.546816  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:24:55.546839  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:24:55.546870  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.545324  255057 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.546955  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:24:55.546971  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.551364  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552354  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552580  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0817 22:24:55.552920  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.552950  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553052  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.553160  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553171  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.553238  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553408  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553592  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553747  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553751  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553805  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.553823  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.553914  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553952  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554237  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.554648  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554839  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.554878  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.594781  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0817 22:24:55.595253  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.595928  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.595955  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.596358  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.596659  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.598866  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.599111  255057 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.599123  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:24:55.599141  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.602520  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.602895  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.602924  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.603114  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.603334  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.603537  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.603678  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.693508  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:24:55.693535  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:24:55.720303  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.739691  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:24:55.739725  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:24:55.752809  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.793480  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:55.793512  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:24:55.805075  255057 node_ready.go:35] waiting up to 6m0s for node "no-preload-525875" to be "Ready" ...
	I0817 22:24:55.805164  255057 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:24:55.834328  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:57.451781  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.731427598s)
	I0817 22:24:57.451824  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.698971636s)
	I0817 22:24:57.451845  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451859  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.451876  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451887  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452756  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.452808  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.452818  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.452832  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.452842  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452965  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453000  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453009  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453019  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453027  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453173  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453247  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453270  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453295  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453306  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453677  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453709  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453720  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.455299  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.455300  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.455325  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.564475  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.730071346s)
	I0817 22:24:57.564539  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.564551  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565087  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565160  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565170  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565185  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.565217  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565483  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565530  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565539  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565550  255057 addons.go:467] Verifying addon metrics-server=true in "no-preload-525875"
	I0817 22:24:57.569420  255057 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
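For reference (not part of the captured output): the same state can be inspected by hand against this profile. The commands below assume the conventional kubeconfig context name (same as the profile) and the addon's usual deployment name, metrics-server:

    out/minikube-linux-amd64 -p no-preload-525875 addons list
    kubectl --context no-preload-525875 -n kube-system get deploy metrics-server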
	I0817 22:24:53.063998  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:53.564081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.064081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.564321  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.064476  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.090168  255215 api_server.go:72] duration metric: took 2.555721263s to wait for apiserver process to appear ...
	I0817 22:24:55.090200  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:55.090223  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:57.571712  255057 addons.go:502] enable addons completed in 2.081503451s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:24:57.882753  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:56.835353  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:56.835388  255491 machine.go:91] provisioned docker machine in 906.787255ms
	I0817 22:24:56.835401  255491 start.go:300] post-start starting for "default-k8s-diff-port-321287" (driver="kvm2")
	I0817 22:24:56.835415  255491 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:56.835460  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:56.835881  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:56.835925  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.838868  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839240  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.839274  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839366  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.839581  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.839808  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.839994  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.932979  255491 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:56.937642  255491 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:56.937675  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:56.937770  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:56.937877  255491 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:56.938003  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:56.949478  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:56.975557  255491 start.go:303] post-start completed in 140.136722ms
	I0817 22:24:56.975589  255491 fix.go:56] fixHost completed within 23.488019817s
	I0817 22:24:56.975618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.979039  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979486  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.979549  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979673  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.979951  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980152  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980301  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.980507  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.981194  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.981211  255491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:57.100308  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311097.042275817
	
	I0817 22:24:57.100341  255491 fix.go:206] guest clock: 1692311097.042275817
	I0817 22:24:57.100351  255491 fix.go:219] Guest: 2023-08-17 22:24:57.042275817 +0000 UTC Remote: 2023-08-17 22:24:56.975593678 +0000 UTC m=+280.298176937 (delta=66.682139ms)
	I0817 22:24:57.100389  255491 fix.go:190] guest clock delta is within tolerance: 66.682139ms
	I0817 22:24:57.100396  255491 start.go:83] releasing machines lock for "default-k8s-diff-port-321287", held for 23.61286841s
	I0817 22:24:57.100436  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.100813  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:57.104312  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.104719  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.104807  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.105050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105744  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105949  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.106081  255491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:57.106133  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.106268  255491 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:57.106395  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.110145  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110531  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.110577  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.111166  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.111352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.111402  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.111567  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.112700  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.112751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.112980  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.113206  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.113379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.113534  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.200530  255491 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:57.232758  255491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:57.405574  255491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:57.413543  255491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:57.413637  255491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:57.438687  255491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:57.438718  255491 start.go:466] detecting cgroup driver to use...
	I0817 22:24:57.438808  255491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:57.458572  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:57.475320  255491 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:57.475397  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:57.493585  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:57.512274  255491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:57.650975  255491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:57.788299  255491 docker.go:212] disabling docker service ...
	I0817 22:24:57.788395  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:57.806350  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:57.819894  255491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:57.966925  255491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:58.088274  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:58.107210  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:58.129691  255491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:58.129766  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.141217  255491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:58.141388  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.153376  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.166177  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
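For reference (not part of the captured output): if the three sed edits above applied cleanly, the CRI-O drop-in now carries the pause image, cgroup manager and conmon cgroup they set, which can be spot-checked with:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"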
	I0817 22:24:58.177326  255491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:58.191627  255491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:58.203913  255491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:58.204001  255491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:58.222901  255491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:58.233280  255491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:58.366794  255491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:58.603364  255491 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:58.603462  255491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:58.616285  255491 start.go:534] Will wait 60s for crictl version
	I0817 22:24:58.616397  255491 ssh_runner.go:195] Run: which crictl
	I0817 22:24:58.622933  255491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:58.668866  255491 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:58.668961  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.735680  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.800442  255491 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:59.550327  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.550367  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:59.550385  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:59.646890  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.646928  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:00.147486  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.160700  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.160745  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:00.647077  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.685626  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.685678  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.147134  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.156042  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:01.156083  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.647569  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.657291  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:25:01.686204  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:01.686260  255215 api_server.go:131] duration metric: took 6.59605111s to wait for apiserver health ...
	I0817 22:25:01.686274  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:25:01.686283  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:01.688856  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
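	(Editor's note) The api_server.go lines above trace minikube polling the apiserver's /healthz endpoint until the 403/500 bootstrap responses give way to a 200 roughly every 500ms. Below is a minimal Go sketch of that kind of polling loop, for orientation only: the URL, the insecure TLS client, and the interval are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
	// or the deadline expires. 403 (RBAC bootstrap not finished) and 500
	// ("healthz check failed") are treated as "not ready yet", mirroring the
	// retries visible in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a self-signed cert during bootstrap, so this
			// sketch skips verification; a real client would pin the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.186:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}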
	I0817 22:24:58.802321  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:58.806172  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.806661  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:58.806696  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.807029  255491 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:58.813045  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:58.830937  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:58.831008  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:58.880355  255491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:58.880469  255491 ssh_runner.go:195] Run: which lz4
	I0817 22:24:58.886729  255491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:24:58.893418  255491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:58.893496  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:25:01.093233  255491 crio.go:444] Took 2.206544 seconds to copy over tarball
	I0817 22:25:01.093422  255491 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:24:57.129390  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Start
	I0817 22:24:57.134160  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring networks are active...
	I0817 22:24:57.134190  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network default is active
	I0817 22:24:57.134205  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network mk-old-k8s-version-294781 is active
	I0817 22:24:57.134214  254975 main.go:141] libmachine: (old-k8s-version-294781) Getting domain xml...
	I0817 22:24:57.134228  254975 main.go:141] libmachine: (old-k8s-version-294781) Creating domain...
	I0817 22:24:58.694125  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting to get IP...
	I0817 22:24:58.695714  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:58.696209  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:58.696356  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:58.696219  256493 retry.go:31] will retry after 307.640559ms: waiting for machine to come up
	I0817 22:24:59.006214  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.008497  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.008536  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.006931  256493 retry.go:31] will retry after 316.904618ms: waiting for machine to come up
	I0817 22:24:59.325929  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.326634  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.326672  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.326593  256493 retry.go:31] will retry after 466.068046ms: waiting for machine to come up
	I0817 22:24:59.794718  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.795268  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.795294  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.795200  256493 retry.go:31] will retry after 399.064857ms: waiting for machine to come up
	I0817 22:25:00.196015  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.196733  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.196760  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.196632  256493 retry.go:31] will retry after 553.183294ms: waiting for machine to come up
	I0817 22:25:00.751687  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.752341  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.752366  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.752283  256493 retry.go:31] will retry after 815.149471ms: waiting for machine to come up
	I0817 22:25:01.568847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:01.569679  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:01.569709  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:01.569547  256493 retry.go:31] will retry after 827.38414ms: waiting for machine to come up
	I0817 22:25:01.690788  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:01.726335  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:01.804837  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:01.844074  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:01.844121  255215 system_pods.go:61] "coredns-5d78c9869d-twvdv" [f8305fa5-f0e7-4090-af8f-a9eefe00be65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:01.844134  255215 system_pods.go:61] "etcd-embed-certs-437183" [409212ae-25eb-4221-b380-d73562531eb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:01.844143  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [a378c1e7-c439-427f-b56e-7aeb2397dda2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:01.844149  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [7d8c33ff-f8bd-4ca8-a1cd-7e03a3c1ea55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:01.844156  255215 system_pods.go:61] "kube-proxy-tqlkl" [3dc68d59-da16-4a8e-8664-24c280769e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:01.844162  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [54addcee-6a78-4a9d-9b15-a02e79ac92be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:01.844169  255215 system_pods.go:61] "metrics-server-74d5c6b9c-h5tt6" [6f8a838b-81d8-444d-aba1-fe46fefe8815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:01.844175  255215 system_pods.go:61] "storage-provisioner" [65cd2cbe-dcb1-4842-af27-551c8d0a93d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:01.844182  255215 system_pods.go:74] duration metric: took 39.323312ms to wait for pod list to return data ...
	I0817 22:25:01.844194  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:01.857431  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:01.857471  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:01.857485  255215 node_conditions.go:105] duration metric: took 13.285661ms to run NodePressure ...
	I0817 22:25:01.857511  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:02.318085  255215 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329089  255215 kubeadm.go:787] kubelet initialised
	I0817 22:25:02.329122  255215 kubeadm.go:788] duration metric: took 10.998414ms waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329133  255215 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.338233  255215 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:59.891549  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.386499  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.889146  255057 node_ready.go:49] node "no-preload-525875" has status "Ready":"True"
	I0817 22:25:02.889193  255057 node_ready.go:38] duration metric: took 7.084075756s waiting for node "no-preload-525875" to be "Ready" ...
	I0817 22:25:02.889209  255057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.915138  255057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926622  255057 pod_ready.go:92] pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:02.926662  255057 pod_ready.go:81] duration metric: took 11.479543ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926677  255057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.597215  255491 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.503742232s)
	I0817 22:25:04.597254  255491 crio.go:451] Took 3.503924 seconds to extract the tarball
	I0817 22:25:04.597269  255491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:04.640799  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:04.683452  255491 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:25:04.683478  255491 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:25:04.683564  255491 ssh_runner.go:195] Run: crio config
	I0817 22:25:04.755546  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:04.755579  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:04.755618  255491 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:04.755646  255491 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8444 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-321287 NodeName:default-k8s-diff-port-321287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:25:04.755865  255491 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-321287"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:04.755964  255491 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-321287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0817 22:25:04.756040  255491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:25:04.768800  255491 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:04.768884  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:04.779179  255491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0817 22:25:04.798848  255491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:04.818088  255491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0817 22:25:04.839021  255491 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:04.843996  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:04.858954  255491 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287 for IP: 192.168.50.30
	I0817 22:25:04.858992  255491 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:04.859193  255491 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:04.859263  255491 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:04.859371  255491 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/client.key
	I0817 22:25:04.859452  255491 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key.2a920f45
	I0817 22:25:04.859519  255491 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key
	I0817 22:25:04.859673  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:04.859717  255491 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:04.859733  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:04.859766  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:04.859800  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:04.859839  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:04.859901  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:04.860739  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:04.893191  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:25:04.923817  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:04.953192  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:25:04.985353  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:05.015743  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:05.043565  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:05.072283  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:05.102360  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:05.131090  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:05.158164  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:05.183921  255491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:05.201231  255491 ssh_runner.go:195] Run: openssl version
	I0817 22:25:05.207477  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:05.218696  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224473  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224551  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.230753  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:05.244810  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:05.255480  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.260972  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.261054  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.267724  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:05.280466  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:05.291975  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298403  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298519  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.306541  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:05.318878  255491 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:05.324755  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:05.333167  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:05.341869  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:05.350173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:05.357173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:05.364289  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
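	(Editor's note) The openssl x509 -noout -checkend 86400 runs above ask whether each certificate expires within the next 24 hours. A small Go equivalent of that check is sketched below; the file paths are copied from the log, but the helper name and structure are illustrative, not what certs.go actually does.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within the given window - the same question `openssl x509 -checkend 86400`
	// answers for a 24h window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}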
	I0817 22:25:05.372301  255491 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-
k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:05.372435  255491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:05.372493  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:05.409127  255491 cri.go:89] found id: ""
	I0817 22:25:05.409211  255491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:05.420288  255491 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:05.420316  255491 kubeadm.go:636] restartCluster start
	I0817 22:25:05.420401  255491 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:05.431336  255491 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.433035  255491 kubeconfig.go:92] found "default-k8s-diff-port-321287" server: "https://192.168.50.30:8444"
	I0817 22:25:05.437153  255491 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:05.446894  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.446956  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.459319  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.459353  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.459412  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.472543  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.973294  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.973386  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.986474  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.473007  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.473141  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.485870  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:02.398531  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:02.399142  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:02.399174  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:02.399045  256493 retry.go:31] will retry after 1.143040413s: waiting for machine to come up
	I0817 22:25:03.543421  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:03.544040  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:03.544076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:03.543971  256493 retry.go:31] will retry after 1.654291601s: waiting for machine to come up
	I0817 22:25:05.200880  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:05.201405  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:05.201435  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:05.201350  256493 retry.go:31] will retry after 1.752048888s: waiting for machine to come up
	I0817 22:25:04.379203  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.872822  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:04.499009  255057 pod_ready.go:92] pod "etcd-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.499040  255057 pod_ready.go:81] duration metric: took 1.572354603s waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.499057  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761691  255057 pod_ready.go:92] pod "kube-apiserver-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.761719  255057 pod_ready.go:81] duration metric: took 262.653075ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761734  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769937  255057 pod_ready.go:92] pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.769968  255057 pod_ready.go:81] duration metric: took 8.225874ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769983  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881406  255057 pod_ready.go:92] pod "kube-proxy-pzpk2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.881444  255057 pod_ready.go:81] duration metric: took 111.452654ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881461  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643623  255057 pod_ready.go:92] pod "kube-scheduler-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:05.643648  255057 pod_ready.go:81] duration metric: took 762.178998ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643658  255057 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:07.695130  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.972803  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.972898  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.985259  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.473416  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.473551  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.485378  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.973567  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.973708  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.989454  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.472762  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.472894  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.489910  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.972732  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.972822  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.984958  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.473569  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.473709  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.490412  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.972908  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.972987  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.986072  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.473333  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.473429  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.485656  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.973314  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.973423  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.989391  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:11.472953  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.473077  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.485192  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.956350  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:06.956874  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:06.956904  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:06.956830  256493 retry.go:31] will retry after 2.09338178s: waiting for machine to come up
	I0817 22:25:09.052006  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:09.052516  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:09.052549  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:09.052447  256493 retry.go:31] will retry after 3.023234706s: waiting for machine to come up
	I0817 22:25:08.877674  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:09.370723  255215 pod_ready.go:92] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:09.370754  255215 pod_ready.go:81] duration metric: took 7.032445075s waiting for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:09.370767  255215 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893038  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:10.893076  255215 pod_ready.go:81] duration metric: took 1.522300039s waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893091  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918300  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:11.918330  255215 pod_ready.go:81] duration metric: took 1.025229003s waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918347  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.192198  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:12.692398  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:11.973001  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.973083  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.984794  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.473426  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.473527  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.489566  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.972736  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.972840  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.984972  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.473572  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.473665  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.485760  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.972804  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.972952  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.984788  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.473423  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.473501  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.484892  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.973394  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.973481  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.985492  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:15.447933  255491 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:15.447967  255491 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:15.447983  255491 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:15.448044  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:15.483471  255491 cri.go:89] found id: ""
	I0817 22:25:15.483596  255491 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:15.500292  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:15.510630  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:15.510695  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520738  255491 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520771  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:15.635683  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:12.079485  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:12.080041  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:12.080069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:12.079986  256493 retry.go:31] will retry after 4.097355523s: waiting for machine to come up
	I0817 22:25:16.178550  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:16.179032  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:16.179063  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:16.178988  256493 retry.go:31] will retry after 4.178327275s: waiting for machine to come up
	I0817 22:25:14.176089  255215 pod_ready.go:102] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:14.679850  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.679881  255215 pod_ready.go:81] duration metric: took 2.761525031s waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.679894  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685308  255215 pod_ready.go:92] pod "kube-proxy-tqlkl" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.685339  255215 pod_ready.go:81] duration metric: took 5.435708ms waiting for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685352  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967073  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.967099  255215 pod_ready.go:81] duration metric: took 281.740411ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967110  255215 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:17.277033  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:15.190295  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:17.193522  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:16.723896  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0881723s)
	I0817 22:25:16.723933  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:16.940953  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.025208  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.110784  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:17.110880  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.123610  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.645363  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.145697  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.645211  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.145515  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.645764  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.665892  255491 api_server.go:72] duration metric: took 2.555110324s to wait for apiserver process to appear ...
	I0817 22:25:19.665920  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:19.665938  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:20.359726  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360375  254975 main.go:141] libmachine: (old-k8s-version-294781) Found IP for machine: 192.168.72.56
	I0817 22:25:20.360408  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserving static IP address...
	I0817 22:25:20.360426  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has current primary IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360798  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserved static IP address: 192.168.72.56
	I0817 22:25:20.360843  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.360866  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting for SSH to be available...
	I0817 22:25:20.360898  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | skip adding static IP to network mk-old-k8s-version-294781 - found existing host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"}
	I0817 22:25:20.360918  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Getting to WaitForSSH function...
	I0817 22:25:20.363319  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.363721  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.363767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.364016  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH client type: external
	I0817 22:25:20.364069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa (-rw-------)
	I0817 22:25:20.364115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:25:20.364135  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | About to run SSH command:
	I0817 22:25:20.364175  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | exit 0
	I0817 22:25:20.454327  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | SSH cmd err, output: <nil>: 
	I0817 22:25:20.454772  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetConfigRaw
	I0817 22:25:20.455585  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.458846  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.459420  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459910  254975 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/config.json ...
	I0817 22:25:20.460207  254975 machine.go:88] provisioning docker machine ...
	I0817 22:25:20.460240  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:20.460489  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460712  254975 buildroot.go:166] provisioning hostname "old-k8s-version-294781"
	I0817 22:25:20.460743  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460912  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.463811  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464166  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.464216  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464391  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.464610  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464779  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464936  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.465157  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.465566  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.465578  254975 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-294781 && echo "old-k8s-version-294781" | sudo tee /etc/hostname
	I0817 22:25:20.604184  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-294781
	
	I0817 22:25:20.604223  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.607313  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.607668  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.607706  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.608091  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.608335  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608511  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608656  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.608845  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.609344  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.609368  254975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-294781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-294781/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-294781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:25:20.731574  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:25:20.731639  254975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:25:20.731679  254975 buildroot.go:174] setting up certificates
	I0817 22:25:20.731697  254975 provision.go:83] configureAuth start
	I0817 22:25:20.731717  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.732057  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.735344  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.735748  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.735780  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.736038  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.738896  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739346  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.739384  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739562  254975 provision.go:138] copyHostCerts
	I0817 22:25:20.739634  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:25:20.739650  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:25:20.739733  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:25:20.739875  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:25:20.739889  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:25:20.739921  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:25:20.740027  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:25:20.740040  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:25:20.740069  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:25:20.740159  254975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-294781 san=[192.168.72.56 192.168.72.56 localhost 127.0.0.1 minikube old-k8s-version-294781]
	I0817 22:25:20.937408  254975 provision.go:172] copyRemoteCerts
	I0817 22:25:20.937480  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:25:20.937508  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.940609  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941074  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.941115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941294  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.941469  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.941678  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.941899  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.033976  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:25:21.062438  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:25:21.090325  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:25:21.116263  254975 provision.go:86] duration metric: configureAuth took 384.54455ms
	I0817 22:25:21.116295  254975 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:25:21.116550  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:25:21.116667  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.119767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120295  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.120351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.120735  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.120898  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.121114  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.121330  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.121982  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.122011  254975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:25:21.449644  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:25:21.449675  254975 machine.go:91] provisioned docker machine in 989.449203ms
	I0817 22:25:21.449686  254975 start.go:300] post-start starting for "old-k8s-version-294781" (driver="kvm2")
	I0817 22:25:21.449696  254975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:25:21.449713  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.450065  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:25:21.450112  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.453436  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.453847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.453893  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.454092  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.454320  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.454501  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.454682  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.544501  254975 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:25:21.549102  254975 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:25:21.549128  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:25:21.549201  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:25:21.549301  254975 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:25:21.549425  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:25:21.559169  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:21.585459  254975 start.go:303] post-start completed in 135.754284ms
	I0817 22:25:21.585496  254975 fix.go:56] fixHost completed within 24.48491231s
	I0817 22:25:21.585531  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.588650  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589045  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.589076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589236  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.589445  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589638  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589810  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.590026  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.590596  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.590621  254975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:25:21.704138  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311121.622295369
	
	I0817 22:25:21.704162  254975 fix.go:206] guest clock: 1692311121.622295369
	I0817 22:25:21.704170  254975 fix.go:219] Guest: 2023-08-17 22:25:21.622295369 +0000 UTC Remote: 2023-08-17 22:25:21.585502401 +0000 UTC m=+364.810906249 (delta=36.792968ms)
	I0817 22:25:21.704193  254975 fix.go:190] guest clock delta is within tolerance: 36.792968ms
	I0817 22:25:21.704200  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 24.603659499s
	I0817 22:25:21.704228  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.704524  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:21.707198  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707512  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.707555  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707715  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708285  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708516  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708605  254975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:25:21.708670  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.708790  254975 ssh_runner.go:195] Run: cat /version.json
	I0817 22:25:21.708816  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.711462  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711744  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711858  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.711906  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712090  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712154  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.712219  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712326  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712347  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712539  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712541  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712749  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712766  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.712936  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:19.775731  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.777036  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:19.693695  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:22.189616  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.818518  254975 ssh_runner.go:195] Run: systemctl --version
	I0817 22:25:21.824498  254975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:25:21.971461  254975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:25:21.978188  254975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:25:21.978271  254975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:25:21.993704  254975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:25:21.993738  254975 start.go:466] detecting cgroup driver to use...
	I0817 22:25:21.993820  254975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:25:22.009074  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:25:22.022874  254975 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:25:22.022935  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:25:22.036508  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:25:22.050919  254975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:25:22.174894  254975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:25:22.307776  254975 docker.go:212] disabling docker service ...
	I0817 22:25:22.307863  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:25:22.322017  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:25:22.334550  254975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:25:22.439721  254975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:25:22.554591  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:25:22.570460  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:25:22.588685  254975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:25:22.588767  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.599716  254975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:25:22.599801  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.611990  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.623873  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.636093  254975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:25:22.647438  254975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:25:22.657266  254975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:25:22.657338  254975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:25:22.672463  254975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:25:22.683508  254975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:25:22.799912  254975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:25:22.995704  254975 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:25:22.995816  254975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:25:23.003199  254975 start.go:534] Will wait 60s for crictl version
	I0817 22:25:23.003280  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:23.008350  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:25:23.042651  254975 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:25:23.042763  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.093624  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.142140  254975 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0817 22:25:24.666188  255491 api_server.go:269] stopped: https://192.168.50.30:8444/healthz: Get "https://192.168.50.30:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:24.666264  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:24.903729  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:24.903775  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:25.404125  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.420215  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.420261  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:25.903943  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.914463  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.914514  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:26.403966  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:26.414021  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:25:26.437708  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:26.437750  255491 api_server.go:131] duration metric: took 6.771821605s to wait for apiserver health ...
	I0817 22:25:26.437779  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:26.437789  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:26.440095  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:26.441921  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:26.469640  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:26.514785  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:26.532553  255491 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:26.532616  255491 system_pods.go:61] "coredns-5d78c9869d-v74x9" [1c42e9be-16fa-47c2-ab04-9ec805320760] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:26.532631  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [a3655572-9d89-4ef6-85db-85dc454d1021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:26.532659  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [6786ac16-78df-4909-8542-0952af5beff6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:26.532675  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [ac8085d0-db9c-4229-b816-4753b7cfcae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:26.532686  255491 system_pods.go:61] "kube-proxy-4d9dx" [22447888-6570-47b7-baac-a5842688de9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:26.532697  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [bfcfc726-e659-4cb9-ad36-9887ddfaf170] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:26.532713  255491 system_pods.go:61] "metrics-server-74d5c6b9c-25l6w" [205dcf88-9d10-416b-8fd0-c93939208c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:26.532722  255491 system_pods.go:61] "storage-provisioner" [be486251-ebb9-4d0b-85c9-fe04e76634e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:26.532738  255491 system_pods.go:74] duration metric: took 17.92531ms to wait for pod list to return data ...
	I0817 22:25:26.532751  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:26.541133  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:26.541180  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:26.541197  255491 node_conditions.go:105] duration metric: took 8.431415ms to run NodePressure ...
	I0817 22:25:26.541228  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:23.143729  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:23.146678  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147145  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:23.147178  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147433  254975 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:25:23.151860  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:23.165714  254975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 22:25:23.165805  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:23.207234  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:23.207334  254975 ssh_runner.go:195] Run: which lz4
	I0817 22:25:23.211497  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:25:23.216272  254975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:25:23.216309  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0817 22:25:25.170164  254975 crio.go:444] Took 1.958697 seconds to copy over tarball
	I0817 22:25:25.170253  254975 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:25:23.792764  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.276276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:24.193719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.692837  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.873863  255491 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:26.878982  255491 kubeadm.go:787] kubelet initialised
	I0817 22:25:26.879005  255491 kubeadm.go:788] duration metric: took 5.10797ms waiting for restarted kubelet to initialise ...
	I0817 22:25:26.879014  255491 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:26.885772  255491 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:29.448692  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:28.464409  254975 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.294096057s)
	I0817 22:25:28.464448  254975 crio.go:451] Took 3.294247 seconds to extract the tarball
	I0817 22:25:28.464461  254975 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:28.505546  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:28.550245  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:28.550282  254975 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:25:28.550393  254975 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.550419  254975 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.550425  254975 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.550466  254975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.550416  254975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.550388  254975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.550543  254975 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0817 22:25:28.550382  254975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551670  254975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551673  254975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.551765  254975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.551779  254975 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.551793  254975 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0817 22:25:28.551814  254975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.551841  254975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.552852  254975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.736900  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.746950  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.747215  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.749256  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.754813  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0817 22:25:28.767639  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.778459  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.834796  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.845176  254975 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0817 22:25:28.845233  254975 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.845295  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.896784  254975 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0817 22:25:28.896843  254975 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.896901  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919129  254975 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0817 22:25:28.919247  254975 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.919192  254975 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0817 22:25:28.919301  254975 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.919320  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919332  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972779  254975 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0817 22:25:28.972831  254975 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0817 22:25:28.972863  254975 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0817 22:25:28.972898  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972901  254975 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.973013  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.986909  254975 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0817 22:25:28.986957  254975 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.987007  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:29.083047  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:29.083137  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:29.083204  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:29.083276  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0817 22:25:29.083227  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0817 22:25:29.083354  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:29.083408  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:29.214678  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0817 22:25:29.214743  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0817 22:25:29.214777  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0817 22:25:29.214847  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0817 22:25:29.214934  254975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.221086  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0817 22:25:29.221101  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0817 22:25:29.221162  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0817 22:25:29.223655  254975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0817 22:25:29.223684  254975 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.223753  254975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0817 22:25:30.774685  254975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550895846s)
	I0817 22:25:30.774722  254975 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0817 22:25:30.774776  254975 cache_images.go:92] LoadImages completed in 2.224475745s
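LoadImages above falls back from the missing preload to per-image cache files: each required image is checked with `podman image inspect`, removed with `crictl rmi` when the expected hash is absent, copied to /var/lib/minikube/images, and finally loaded with `podman load`. Below is a minimal local sketch of that final load step; runSudo is a hypothetical stand-in for minikube's SSH runner.

    package images

    import (
        "os"
        "os/exec"
    )

    // runSudo is a hypothetical stand-in for minikube's SSH runner; here it
    // simply runs the command locally with sudo.
    func runSudo(args ...string) error {
        cmd := exec.Command("sudo", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    // loadCachedImage loads one cached image tarball into CRI-O's image store.
    // podman and CRI-O share containers/storage, which is why `podman load`
    // makes the image visible to crictl afterwards.
    func loadCachedImage(tarball string) error {
        return runSudo("podman", "load", "-i", tarball)
    }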
	W0817 22:25:30.774942  254975 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0817 22:25:30.775051  254975 ssh_runner.go:195] Run: crio config
	I0817 22:25:30.840592  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:30.840623  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:30.840650  254975 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:30.840680  254975 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-294781 NodeName:old-k8s-version-294781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 22:25:30.840917  254975 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-294781"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-294781
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.56:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:30.841030  254975 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-294781 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:25:30.841111  254975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0817 22:25:30.850719  254975 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:30.850818  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:30.862807  254975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0817 22:25:30.882111  254975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:30.900496  254975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0817 22:25:30.921163  254975 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:30.925789  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:30.941284  254975 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781 for IP: 192.168.72.56
	I0817 22:25:30.941335  254975 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:30.941556  254975 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:30.941617  254975 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:30.941728  254975 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/client.key
	I0817 22:25:30.941792  254975 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key.aa8f9bd0
	I0817 22:25:30.941827  254975 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key
	I0817 22:25:30.941948  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:30.941994  254975 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:30.942005  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:30.942039  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:30.942107  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:30.942141  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:30.942200  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:30.942953  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:30.973814  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:25:31.003939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:31.035137  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:25:31.063172  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:31.092059  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:31.120881  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:31.148113  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:31.175102  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:31.204939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:31.231548  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:31.263908  254975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:31.287143  254975 ssh_runner.go:195] Run: openssl version
	I0817 22:25:31.293380  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:31.307058  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313520  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313597  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.321182  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:31.332412  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:31.343318  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.348972  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.349044  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.355568  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:31.366257  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:31.376489  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382818  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382919  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.390171  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
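The openssl/ln pairs above install each CA under /etc/ssl/certs using OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash, and the symlink `<hash>.0` is what OpenSSL's certificate lookup resolves. A small local sketch of that idiom (minikube runs the same commands over SSH):

    package certs

    import (
        "os/exec"
        "strings"
    )

    // linkCA mirrors the openssl/ln sequence in the log: compute the
    // subject-name hash, then point /etc/ssl/certs/<hash>.0 at the cert.
    func linkCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }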
	I0817 22:25:31.400360  254975 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:31.406177  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:31.413881  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:31.422198  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:31.429468  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:31.437072  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:31.444150  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
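The `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 24 hours (non-zero exit if so), which decides whether certs need regeneration before restart. A hedged native equivalent using crypto/x509:

    package certs

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "time"
    )

    // expiresWithinDay is the native analogue of `openssl x509 -checkend 86400`:
    // it reports whether the certificate's NotAfter falls inside the next 24h.
    func expiresWithinDay(pemBytes []byte) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24*time.Hour).After(cert.NotAfter), nil
    }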
	I0817 22:25:31.450952  254975 kubeadm.go:404] StartCluster: {Name:old-k8s-version-294781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:31.451064  254975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:31.451140  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:31.489009  254975 cri.go:89] found id: ""
	I0817 22:25:31.489098  254975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:31.499098  254975 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:31.499126  254975 kubeadm.go:636] restartCluster start
	I0817 22:25:31.499191  254975 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:31.510909  254975 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.512049  254975 kubeconfig.go:92] found "old-k8s-version-294781" server: "https://192.168.72.56:8443"
	I0817 22:25:31.514634  254975 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:31.525968  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.526039  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.539397  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.539423  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.539485  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.552492  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:28.276789  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:30.406349  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:29.190524  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.195732  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.919929  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.415784  255491 pod_ready.go:92] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:32.415817  255491 pod_ready.go:81] duration metric: took 5.530013816s waiting for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:32.415840  255491 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:34.435177  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.435405  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.053512  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.053604  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.065409  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.553555  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.553647  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.566402  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.052703  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.052785  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.069027  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.552583  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.552724  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.566692  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.053418  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.053493  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.065794  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.553389  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.553490  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.566130  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.052663  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.052753  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.065276  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.553446  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.553544  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.567754  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.053326  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.053407  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.066562  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.553098  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.553200  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.564869  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.777224  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:35.273781  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.276847  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:33.690890  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.190746  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.435673  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.435712  255491 pod_ready.go:81] duration metric: took 5.019858859s waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.435724  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441582  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.441602  255491 pod_ready.go:81] duration metric: took 5.870633ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441614  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448615  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.448643  255491 pod_ready.go:81] duration metric: took 7.021551ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448656  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454742  255491 pod_ready.go:92] pod "kube-proxy-4d9dx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.454768  255491 pod_ready.go:81] duration metric: took 6.104572ms waiting for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454780  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462598  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.462623  255491 pod_ready.go:81] duration metric: took 7.834341ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462637  255491 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:39.741207  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.053213  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.053363  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.065752  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:37.553604  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.553709  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.569278  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.052848  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.052956  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.065011  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.552809  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.552915  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.564702  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.053287  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.053378  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.065004  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.553557  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.553654  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.565776  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.053269  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.053352  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.065089  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.552595  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.552718  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.564921  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.053531  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:41.053617  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:41.065803  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.526724  254975 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:41.526774  254975 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:41.526788  254975 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:41.526858  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:41.560831  254975 cri.go:89] found id: ""
	I0817 22:25:41.560931  254975 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:41.577926  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:41.587081  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:41.587169  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596656  254975 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596690  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:41.716908  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:39.776178  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.275946  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:38.193834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:40.691324  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.692667  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:41.745307  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:44.242440  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.243469  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.840419  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123468828s)
	I0817 22:25:42.840454  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.062568  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.150374  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
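Because existing configuration files were found, the restart path re-runs individual kubeadm init phases rather than a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd, with the `addon all` phase deferred until the apiserver reports healthy (it appears further down, at 22:25:53). A small sketch that just assembles those commands from the binary path, version, and config path shown in the log:

    package bootstrap

    import "fmt"

    // kubeadmPhaseCmds lists the phase commands run on the restart path above.
    func kubeadmPhaseCmds() []string {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        cmds := make([]string, 0, len(phases))
        for _, p := range phases {
            cmds = append(cmds, fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p))
        }
        return cmds
    }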
	I0817 22:25:43.265948  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:43.266043  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.284133  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.804512  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.304041  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.803961  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.828050  254975 api_server.go:72] duration metric: took 1.562100837s to wait for apiserver process to appear ...
	I0817 22:25:44.828085  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:44.828102  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.828570  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:44.828611  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.829005  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:45.329868  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.276477  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.775206  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:45.189460  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:47.690349  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:48.741121  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.742231  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.330553  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:50.330619  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.714219  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.714253  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:51.714268  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.756012  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.756052  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:49.276427  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.775567  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:49.698834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:52.190711  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.829442  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.888999  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:51.889031  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.329747  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.337398  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.337432  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.829817  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.839157  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.839187  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:53.329580  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:53.336858  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:25:53.347151  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:25:53.347191  254975 api_server.go:131] duration metric: took 8.519097199s to wait for apiserver health ...
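The healthz progression above is the usual startup sequence for this apiserver version: connection refused while the static pod is still coming up, 403 while anonymous access to /healthz is still forbidden (the rbac/bootstrap-roles post-start hook has not yet created the bootstrap bindings), 500 while other post-start hooks are pending, then 200 once everything completes. A hedged Go sketch of such a polling loop, skipping TLS verification as a bootstrap probe typically must before the cluster CA is trusted by the caller:

    package healthz

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout elapses. Interval and client timeout are assumptions.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for {
            resp, err := client.Get(url)
            if err == nil {
                io.Copy(io.Discard, resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the "ok" seen in the log
                }
                // 403 or 500: not ready yet, keep polling.
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("apiserver never became healthy at %s", url)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }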
	I0817 22:25:53.347204  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:53.347212  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:53.349243  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:52.743242  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:55.241261  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:53.350976  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:53.364808  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
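The bridge CNI step writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not shown in the log; the constant below is only a hedged example of the typical shape of such a bridge configuration, with the host-local subnet matching the pod CIDR (10.244.0.0/16) from the kubeadm options above.

    package cni

    // bridgeConflist is an illustrative bridge + portmap chain, not the
    // verbatim file minikube writes.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`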
	I0817 22:25:53.397606  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:53.411868  254975 system_pods.go:59] 7 kube-system pods found
	I0817 22:25:53.411903  254975 system_pods.go:61] "coredns-5644d7b6d9-nz5d2" [5514f434-2c17-42dc-b35b-fef5bd6886fb] Running
	I0817 22:25:53.411909  254975 system_pods.go:61] "etcd-old-k8s-version-294781" [75919c29-02ae-46f6-8173-507b491d16da] Running
	I0817 22:25:53.411920  254975 system_pods.go:61] "kube-apiserver-old-k8s-version-294781" [f6d458ca-a84f-40dc-8b6a-b53fb8062c50] Running
	I0817 22:25:53.411930  254975 system_pods.go:61] "kube-controller-manager-old-k8s-version-294781" [0827f676-c11c-44b1-9bca-f8f905448490] Pending
	I0817 22:25:53.411937  254975 system_pods.go:61] "kube-proxy-f2bdh" [8b0dfe14-026a-44e1-9c6f-7f16fb61f90e] Running
	I0817 22:25:53.411943  254975 system_pods.go:61] "kube-scheduler-old-k8s-version-294781" [9ced2a30-44a8-421f-94ef-19be20b58c5d] Running
	I0817 22:25:53.411947  254975 system_pods.go:61] "storage-provisioner" [c9c05cca-5426-4071-a408-815c723a76f3] Running
	I0817 22:25:53.411954  254975 system_pods.go:74] duration metric: took 14.318728ms to wait for pod list to return data ...
	I0817 22:25:53.411961  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:53.415672  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:53.415715  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:53.415731  254975 node_conditions.go:105] duration metric: took 3.76549ms to run NodePressure ...
	I0817 22:25:53.415758  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:53.808911  254975 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:53.814276  254975 retry.go:31] will retry after 200.301174ms: kubelet not initialised
	I0817 22:25:54.020423  254975 retry.go:31] will retry after 376.047728ms: kubelet not initialised
	I0817 22:25:54.401967  254975 retry.go:31] will retry after 672.586884ms: kubelet not initialised
	I0817 22:25:55.079229  254975 retry.go:31] will retry after 1.101994757s: kubelet not initialised
	I0817 22:25:56.186236  254975 retry.go:31] will retry after 770.380926ms: kubelet not initialised
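The retry.go lines here (and continuing below) wait for the restarted kubelet with delays that grow roughly exponentially with jitter: 200ms, 376ms, 672ms, 1.1s, and so on. A hedged sketch of a retry helper with that kind of schedule; the exact backoff policy minikube uses is an assumption here.

    package retry

    import (
        "math/rand"
        "time"
    )

    // withBackoff retries f with roughly doubling, jittered delays until it
    // succeeds or the deadline passes.
    func withBackoff(f func() error, timeout time.Duration) error {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            time.Sleep(delay/2 + jitter)
            delay *= 2
        }
    }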
	I0817 22:25:53.777865  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.275799  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:54.690880  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.189416  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.242279  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.742604  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.961679  254975 retry.go:31] will retry after 2.235217601s: kubelet not initialised
	I0817 22:25:59.205012  254975 retry.go:31] will retry after 2.063266757s: kubelet not initialised
	I0817 22:26:01.275712  254975 retry.go:31] will retry after 5.105867057s: kubelet not initialised
	I0817 22:25:58.774815  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.275856  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.190180  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.692286  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.744707  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.240683  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.388158  254975 retry.go:31] will retry after 3.608427827s: kubelet not initialised
	I0817 22:26:03.775281  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.274839  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.190713  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.689980  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.742399  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.742739  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.004038  254975 retry.go:31] will retry after 8.940252852s: kubelet not initialised
	I0817 22:26:08.275499  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.275871  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.696436  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:11.189718  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.240363  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.241894  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:12.776238  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.274945  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.690119  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:16.189786  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:17.741982  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:20.242289  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.951040  254975 retry.go:31] will retry after 14.553103306s: kubelet not initialised
	I0817 22:26:17.774269  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:19.775075  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.274390  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.690720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:21.191013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.242355  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.742592  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.275310  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:26.774906  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:23.690032  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:25.690127  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.692342  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.243421  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:29.245714  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:28.777378  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.274134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:30.189730  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:32.689849  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.741791  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.240900  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:36.241988  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:33.521718  254975 kubeadm.go:787] kubelet initialised
	I0817 22:26:33.521745  254975 kubeadm.go:788] duration metric: took 39.712803989s waiting for restarted kubelet to initialise ...
	I0817 22:26:33.521755  254975 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:26:33.535522  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545447  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.545474  254975 pod_ready.go:81] duration metric: took 9.918514ms waiting for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545487  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551823  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.551853  254975 pod_ready.go:81] duration metric: took 6.357251ms waiting for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551867  254975 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559246  254975 pod_ready.go:92] pod "etcd-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.559278  254975 pod_ready.go:81] duration metric: took 7.402957ms waiting for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559291  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565344  254975 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.565373  254975 pod_ready.go:81] duration metric: took 6.072723ms waiting for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565387  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909036  254975 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.909073  254975 pod_ready.go:81] duration metric: took 343.677116ms waiting for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909089  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308592  254975 pod_ready.go:92] pod "kube-proxy-f2bdh" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.308619  254975 pod_ready.go:81] duration metric: took 399.522419ms waiting for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308630  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708489  254975 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.708517  254975 pod_ready.go:81] duration metric: took 399.879822ms waiting for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708528  254975 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.275646  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:35.774730  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.692013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.191914  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.242929  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.741450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.516268  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.275712  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.774133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.690461  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:41.690828  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.242204  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.741216  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:42.016209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.516019  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.275668  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.776837  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.189846  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:46.691439  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.742285  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.241123  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.016817  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.517406  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:48.276244  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.774977  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.189105  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:51.190270  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.241800  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.739978  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.016631  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.515565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.516890  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.274258  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.278000  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.192619  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.693990  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.742737  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.241115  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.241654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.015461  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.017347  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:57.775264  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.775399  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.776382  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:58.190121  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:00.190792  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:02.697428  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.741654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.742940  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.516565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.516966  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:04.275212  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:06.277355  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.190366  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:07.190973  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.244485  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.741985  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.015202  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.016691  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.774384  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.774729  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:09.692011  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.190853  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.742313  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:15.241577  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.514881  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.516950  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.517383  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.774867  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.775482  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.274793  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.689813  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.692012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.243159  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.517518  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.016576  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.275829  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.276653  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.692315  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.189564  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:22.240740  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:24.241960  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.242201  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.017348  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.515756  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.775957  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.275937  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.189646  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.690338  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.690947  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.741912  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.742165  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.516071  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.517838  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.276630  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.775134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.691012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:31.696187  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:33.241142  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:35.243536  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.017452  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.515974  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.516450  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.775448  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.775822  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.274968  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.188369  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.188928  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.741436  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.741983  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.015982  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.516526  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.278879  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.774782  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:38.189378  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:40.695851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:42.240995  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.741178  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.015737  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.018254  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.776276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.276133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.188678  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:45.189618  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:47.191825  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.741669  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.241194  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.242571  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.516687  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.016735  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.277486  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:50.775420  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.689852  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.691216  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.741209  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.743232  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.518209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.016075  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.275443  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.774204  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.692276  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.190072  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.242009  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:00.242183  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.516449  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.016290  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:57.775327  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:59.775642  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.275827  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.691467  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.189998  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.740875  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.742481  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.523305  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.016025  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.275917  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.777604  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.190940  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:05.690559  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.693124  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.241721  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.241889  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:08.017490  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.018815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.274176  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.275009  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.190851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.689465  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.741056  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.241846  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:16.243898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.516550  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.017547  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:13.276368  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.773960  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.690587  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.189824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:18.742657  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.243561  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.515978  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:20.016035  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.774474  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.776240  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.275209  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.194335  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.691142  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:23.743251  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.241450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.021055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.516645  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.776861  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.274029  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.189740  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.691801  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:28.242364  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:30.740610  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.016851  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.017289  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.517096  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.774126  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.275287  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.189744  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.691190  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.741643  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:35.242108  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.015792  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.016247  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.773849  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.777072  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:33.692774  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.189115  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:37.741756  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.244685  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.016815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.017616  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:39.276756  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:41.774190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.190001  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.690824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.742547  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.241354  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.518073  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.016560  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.776627  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:46.275092  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.189166  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.692178  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.697772  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.242829  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.741555  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.516429  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.516588  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:48.775347  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:51.274069  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:50.191415  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.694362  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.242367  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.742705  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.019113  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.516748  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:53.275190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.773511  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.189720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.189811  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.241152  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.242170  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.015866  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.016464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.515901  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.776667  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:00.273941  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.190719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.190988  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.741107  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.742524  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.243093  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.516444  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.017964  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:02.775583  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.280071  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.690586  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.643882  255057 pod_ready.go:81] duration metric: took 4m0.000182343s waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:05.643921  255057 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:05.643932  255057 pod_ready.go:38] duration metric: took 4m2.754707603s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:05.643956  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:29:05.643998  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:05.644060  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:05.703194  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:05.703221  255057 cri.go:89] found id: ""
	I0817 22:29:05.703229  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:05.703283  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.708602  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:05.708676  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:05.747581  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:05.747610  255057 cri.go:89] found id: ""
	I0817 22:29:05.747619  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:05.747692  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.753231  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:05.753331  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:05.795460  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:05.795489  255057 cri.go:89] found id: ""
	I0817 22:29:05.795499  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:05.795562  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.801181  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:05.801268  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:05.840433  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:05.840463  255057 cri.go:89] found id: ""
	I0817 22:29:05.840472  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:05.840546  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.845974  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:05.846039  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:05.886216  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:05.886243  255057 cri.go:89] found id: ""
	I0817 22:29:05.886252  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:05.886314  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.891204  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:05.891286  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:05.927636  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:05.927661  255057 cri.go:89] found id: ""
	I0817 22:29:05.927669  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:05.927732  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.932173  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:05.932230  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:05.963603  255057 cri.go:89] found id: ""
	I0817 22:29:05.963634  255057 logs.go:284] 0 containers: []
	W0817 22:29:05.963646  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:05.963654  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:05.963727  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:05.996465  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:05.996489  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:05.996496  255057 cri.go:89] found id: ""
	I0817 22:29:05.996505  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:05.996572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.001291  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.006314  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:06.006348  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:06.051348  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:06.051386  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:06.226315  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:06.226362  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:06.263289  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:06.263321  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:06.308223  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:06.308262  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:06.346964  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:06.347001  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:06.382834  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:06.382878  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:06.431491  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:06.431527  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:06.485901  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:06.485948  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:07.054256  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:07.054315  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:07.093229  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093417  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093570  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093737  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.119377  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:07.119420  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:07.137712  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:07.137756  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:07.187463  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:07.187511  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:07.252728  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252775  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:07.252844  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:07.252856  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252865  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252872  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252878  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.252884  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252890  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
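
Each "Gathering logs for ..." step above shells out to crictl with a 400-line tail against a container ID discovered earlier. A rough local (non-SSH) approximation of that pattern in Go, reusing the kube-proxy container ID from the log above; this is an illustrative sketch, not minikube's own ssh_runner code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command shape as the ssh_runner calls above, run locally instead of over SSH.
        containerID := "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405" // kube-proxy container from the log
        out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", containerID).CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out))
    }
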
	I0817 22:29:08.741270  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:11.245029  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:08.516388  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:10.518542  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:07.775391  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:09.775841  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:12.276748  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.741788  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:16.242264  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.018983  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:15.516221  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.774832  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.967926  255215 pod_ready.go:81] duration metric: took 4m0.000797383s waiting for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:14.967968  255215 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:14.967995  255215 pod_ready.go:38] duration metric: took 4m12.638851973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:14.968025  255215 kubeadm.go:640] restartCluster took 4m34.07416066s
	W0817 22:29:14.968112  255215 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:14.968150  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
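
The pod_ready loop above polls each target pod until its Ready condition reports True, giving up after the 4m0s budget named in the timeout message. A minimal client-go sketch of the same check (hypothetical helper, not minikube's own code; the kubeconfig path and pod name are placeholders):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // same budget as the wait above
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-placeholder", metav1.GetOptions{}) // placeholder name
            if err == nil && isPodReady(p) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
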
	I0817 22:29:17.254245  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:29:17.278452  255057 api_server.go:72] duration metric: took 4m21.775005609s to wait for apiserver process to appear ...
	I0817 22:29:17.278488  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:29:17.278540  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:17.278675  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:17.317529  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:17.317554  255057 cri.go:89] found id: ""
	I0817 22:29:17.317562  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:17.317626  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.323505  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:17.323593  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:17.367258  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.367282  255057 cri.go:89] found id: ""
	I0817 22:29:17.367290  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:17.367355  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.372332  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:17.372424  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:17.406884  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:17.406914  255057 cri.go:89] found id: ""
	I0817 22:29:17.406923  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:17.406990  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.411562  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:17.411626  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:17.452516  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.452549  255057 cri.go:89] found id: ""
	I0817 22:29:17.452560  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:17.452654  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.458237  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:17.458327  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:17.498524  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:17.498550  255057 cri.go:89] found id: ""
	I0817 22:29:17.498559  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:17.498621  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.504941  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:17.505024  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:17.543542  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.543570  255057 cri.go:89] found id: ""
	I0817 22:29:17.543580  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:17.543646  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.548420  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:17.548488  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:17.589411  255057 cri.go:89] found id: ""
	I0817 22:29:17.589441  255057 logs.go:284] 0 containers: []
	W0817 22:29:17.589449  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:17.589455  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:17.589520  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:17.624044  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:17.624075  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.624083  255057 cri.go:89] found id: ""
	I0817 22:29:17.624092  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:17.624160  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.631040  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.635336  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:17.635359  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:17.688966  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689294  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689576  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689899  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:17.729861  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:17.729923  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:17.746619  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:17.746663  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.805149  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:17.805198  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.842639  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:17.842673  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.905357  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:17.905406  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.943860  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:17.943893  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:18.242331  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:20.742262  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:17.517585  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:19.519464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:18.114000  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:18.114038  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:18.176549  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:18.176602  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:18.211903  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:18.211947  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:18.246566  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:18.246600  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:18.280810  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:18.280853  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:18.831902  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:18.831957  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:18.883170  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883219  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:18.883304  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:18.883323  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883336  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883352  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883364  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:18.883382  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883391  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:23.242587  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:25.742126  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:22.017269  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:24.017806  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:26.516458  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.241489  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:30.741723  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.516703  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:31.016367  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.884252  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:29:28.889957  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:29:28.891532  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:29:28.891560  255057 api_server.go:131] duration metric: took 11.613062869s to wait for apiserver health ...
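
The healthz wait above amounts to an HTTPS GET against the apiserver that succeeds once it returns status 200 with body "ok". A rough equivalent in Go, illustrative only; the real client trusts the cluster CA, whereas this sketch skips TLS verification for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skipping certificate verification only to keep the sketch short;
        // a real probe should trust the cluster's CA bundle instead.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.196:8443/healthz") // address taken from the log above
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
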
	I0817 22:29:28.891571  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:29:28.891602  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:28.891669  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:28.927462  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:28.927496  255057 cri.go:89] found id: ""
	I0817 22:29:28.927506  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:28.927572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.932195  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:28.932284  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:28.974041  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:28.974092  255057 cri.go:89] found id: ""
	I0817 22:29:28.974103  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:28.974172  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.978230  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:28.978302  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:29.012431  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.012459  255057 cri.go:89] found id: ""
	I0817 22:29:29.012469  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:29.012539  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.017232  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:29.017311  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:29.051208  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.051235  255057 cri.go:89] found id: ""
	I0817 22:29:29.051242  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:29.051292  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.056125  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:29.056193  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:29.094165  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.094196  255057 cri.go:89] found id: ""
	I0817 22:29:29.094207  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:29.094277  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.098992  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:29.099054  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:29.138522  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.138552  255057 cri.go:89] found id: ""
	I0817 22:29:29.138561  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:29.138614  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.143075  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:29.143159  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:29.177797  255057 cri.go:89] found id: ""
	I0817 22:29:29.177831  255057 logs.go:284] 0 containers: []
	W0817 22:29:29.177842  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:29.177850  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:29.177916  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:29.208897  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.208922  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.208928  255057 cri.go:89] found id: ""
	I0817 22:29:29.208937  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:29.209008  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.213083  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.217020  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:29.217043  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:29.253559  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253779  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253989  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.254225  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:29.280705  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:29.280746  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:29.295400  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:29.295432  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:29.344222  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:29.344268  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:29.482768  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:29.482812  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:29.541274  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:29.541317  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.577842  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:29.577876  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.613556  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:29.613595  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.654840  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:29.654886  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.711929  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:29.711974  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.749746  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:29.749802  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.782899  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:29.782932  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:30.286425  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:30.286488  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:30.328588  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328616  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:30.328686  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:30.328701  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328715  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328729  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328745  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:30.328754  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328762  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:32.741952  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.241640  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:33.516723  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.516887  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.339646  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:29:40.339676  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.339681  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.339685  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.339690  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.339694  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.339698  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.339705  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.339711  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.339722  255057 system_pods.go:74] duration metric: took 11.448139171s to wait for pod list to return data ...
	I0817 22:29:40.339730  255057 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:29:40.344246  255057 default_sa.go:45] found service account: "default"
	I0817 22:29:40.344271  255057 default_sa.go:55] duration metric: took 4.534553ms for default service account to be created ...
	I0817 22:29:40.344280  255057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:29:40.353485  255057 system_pods.go:86] 8 kube-system pods found
	I0817 22:29:40.353521  255057 system_pods.go:89] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.353529  255057 system_pods.go:89] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.353537  255057 system_pods.go:89] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.353546  255057 system_pods.go:89] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.353553  255057 system_pods.go:89] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.353560  255057 system_pods.go:89] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.353579  255057 system_pods.go:89] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.353589  255057 system_pods.go:89] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.353598  255057 system_pods.go:126] duration metric: took 9.313259ms to wait for k8s-apps to be running ...
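
The system_pods check enumerates everything in kube-system and treats the run as healthy once only optional pods (here metrics-server) remain non-Ready. A condensed client-go sketch of that enumeration (hypothetical snippet; the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Phase is Pending/Running/Succeeded/Failed; the log above prints it next to each pod name.
            fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
        }
    }
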
	I0817 22:29:40.353612  255057 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:29:40.353685  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:40.376714  255057 system_svc.go:56] duration metric: took 23.088082ms WaitForService to wait for kubelet.
	I0817 22:29:40.376759  255057 kubeadm.go:581] duration metric: took 4m44.873323742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:29:40.377191  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:29:40.385016  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:29:40.385043  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:29:40.385055  255057 node_conditions.go:105] duration metric: took 7.857619ms to run NodePressure ...
	I0817 22:29:40.385068  255057 start.go:228] waiting for startup goroutines ...
	I0817 22:29:40.385074  255057 start.go:233] waiting for cluster config update ...
	I0817 22:29:40.385085  255057 start.go:242] writing updated cluster config ...
	I0817 22:29:40.385411  255057 ssh_runner.go:195] Run: rm -f paused
	I0817 22:29:40.457420  255057 start.go:600] kubectl: 1.28.0, cluster: 1.28.0-rc.1 (minor skew: 0)
	I0817 22:29:40.460043  255057 out.go:177] * Done! kubectl is now configured to use "no-preload-525875" cluster and "default" namespace by default
	I0817 22:29:37.242898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:37.462917  255491 pod_ready.go:81] duration metric: took 4m0.00026087s waiting for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:37.462956  255491 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:37.463009  255491 pod_ready.go:38] duration metric: took 4m10.583985022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:37.463050  255491 kubeadm.go:640] restartCluster took 4m32.042723788s
	W0817 22:29:37.463141  255491 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:37.463185  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:37.517852  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.016790  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:42.517001  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:45.016757  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:47.291163  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.322979002s)
	I0817 22:29:47.291246  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:47.305948  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:29:47.316036  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:29:47.325470  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:29:47.325519  255215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:29:47.566297  255215 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:29:47.017112  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:49.017246  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:51.018095  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:53.519020  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:56.016627  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.087786  255215 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:29:59.087860  255215 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:29:59.087991  255215 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:29:59.088169  255215 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:29:59.088306  255215 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:29:59.088388  255215 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:29:59.090358  255215 out.go:204]   - Generating certificates and keys ...
	I0817 22:29:59.090460  255215 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:29:59.090547  255215 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:29:59.090660  255215 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:29:59.090766  255215 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:29:59.090886  255215 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:29:59.090976  255215 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:29:59.091060  255215 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:29:59.091152  255215 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:29:59.091250  255215 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:29:59.091350  255215 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:29:59.091435  255215 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:29:59.091514  255215 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:29:59.091589  255215 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:29:59.091655  255215 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:29:59.091759  255215 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:29:59.091836  255215 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:29:59.091960  255215 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:29:59.092070  255215 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:29:59.092127  255215 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:29:59.092207  255215 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:29:59.094268  255215 out.go:204]   - Booting up control plane ...
	I0817 22:29:59.094408  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:29:59.094513  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:29:59.094594  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:29:59.094719  255215 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:29:59.094944  255215 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:29:59.095031  255215 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504676 seconds
	I0817 22:29:59.095206  255215 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:29:59.095401  255215 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:29:59.095494  255215 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:29:59.095757  255215 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-437183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:29:59.095844  255215 kubeadm.go:322] [bootstrap-token] Using token: 0fftkt.nm31ryo8p4990tdr
	I0817 22:29:59.097581  255215 out.go:204]   - Configuring RBAC rules ...
	I0817 22:29:59.097750  255215 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:29:59.097884  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:29:59.098097  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:29:59.098258  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:29:59.098405  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:29:59.098510  255215 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:29:59.098679  255215 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:29:59.098745  255215 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:29:59.098802  255215 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:29:59.098811  255215 kubeadm.go:322] 
	I0817 22:29:59.098889  255215 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:29:59.098898  255215 kubeadm.go:322] 
	I0817 22:29:59.099010  255215 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:29:59.099033  255215 kubeadm.go:322] 
	I0817 22:29:59.099069  255215 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:29:59.099142  255215 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:29:59.099221  255215 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:29:59.099232  255215 kubeadm.go:322] 
	I0817 22:29:59.099297  255215 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:29:59.099307  255215 kubeadm.go:322] 
	I0817 22:29:59.099365  255215 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:29:59.099374  255215 kubeadm.go:322] 
	I0817 22:29:59.099446  255215 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:29:59.099552  255215 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:29:59.099660  255215 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:29:59.099670  255215 kubeadm.go:322] 
	I0817 22:29:59.099799  255215 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:29:59.099909  255215 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:29:59.099917  255215 kubeadm.go:322] 
	I0817 22:29:59.100037  255215 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100173  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:29:59.100205  255215 kubeadm.go:322] 	--control-plane 
	I0817 22:29:59.100218  255215 kubeadm.go:322] 
	I0817 22:29:59.100348  255215 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:29:59.100359  255215 kubeadm.go:322] 
	I0817 22:29:59.100472  255215 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100610  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:29:59.100639  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:29:59.100650  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:29:59.102534  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:29:58.017949  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:00.519619  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.104107  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:29:59.128756  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
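
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration mentioned above. Its exact contents are not shown in this log; the sketch below only approximates the usual shape of a bridge conflist (the plugin fields and the 10.244.0.0/16 subnet are assumptions, not the captured file):

    package main

    import "os"

    func main() {
        // Illustrative approximation of a bridge CNI conflist; the file minikube
        // actually transfers (457 bytes in the log above) may differ in detail.
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }
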
	I0817 22:29:59.172002  255215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=embed-certs-437183 minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.717974  255215 ops.go:34] apiserver oom_adj: -16
	I0817 22:29:59.718154  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.815994  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.419198  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.919196  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.419096  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.919517  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:02.419076  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.017120  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:05.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:02.919289  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.419268  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.919021  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.418663  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.919015  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.419325  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.919309  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.418701  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.919301  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.418670  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.919445  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.419363  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.918988  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.418788  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.918948  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.418731  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.919293  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.419374  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.578800  255215 kubeadm.go:1081] duration metric: took 12.40679081s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:11.578850  255215 kubeadm.go:406] StartCluster complete in 5m30.729798213s
	I0817 22:30:11.578877  255215 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.578990  255215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:11.581741  255215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.582107  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:11.582305  255215 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:11.582414  255215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-437183"
	I0817 22:30:11.582435  255215 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-437183"
	I0817 22:30:11.582433  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:11.582436  255215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-437183"
	I0817 22:30:11.582449  255215 addons.go:69] Setting metrics-server=true in profile "embed-certs-437183"
	I0817 22:30:11.582461  255215 addons.go:231] Setting addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:11.582465  255215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-437183"
	W0817 22:30:11.582467  255215 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:11.582521  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	W0817 22:30:11.582443  255215 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:11.582609  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.582956  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582976  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582992  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583000  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583326  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.583361  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.600606  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0817 22:30:11.601162  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.601890  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.601918  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.602386  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.603044  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.603110  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.603922  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0817 22:30:11.604193  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0817 22:30:11.604476  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.604711  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.605320  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605342  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605474  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605500  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605874  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.605927  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.606184  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.606616  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.606654  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.622026  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0817 22:30:11.622822  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.623522  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.623556  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.624021  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.624332  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.626478  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.629171  255215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:11.627845  255215 addons.go:231] Setting addon default-storageclass=true in "embed-certs-437183"
	W0817 22:30:11.629212  255215 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:11.629267  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.628437  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0817 22:30:11.629683  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.631294  255215 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.631295  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.629905  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.631315  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:11.631339  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.632333  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.632356  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.632860  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.633085  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.635520  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.635727  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.638116  255215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:09.776936  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.313725935s)
	I0817 22:30:09.777008  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:09.794808  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:09.806086  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:09.818495  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:09.818547  255491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:30:10.061316  255491 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:30:11.636353  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.636644  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.640483  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.640486  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:11.640508  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:11.640535  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.640703  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.640905  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.641073  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.645685  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646351  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.646376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646867  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.647096  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.647286  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.647444  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.655819  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0817 22:30:11.656540  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.657308  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.657326  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.657864  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.658485  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.658520  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.679610  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0817 22:30:11.680268  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.680977  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.681013  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.681485  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.681722  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.683711  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.686274  255215 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.686297  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:11.686323  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.692154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.692160  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692245  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.692288  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692447  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.692691  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.692899  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.742259  255215 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-437183" context rescaled to 1 replicas
	I0817 22:30:11.742317  255215 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:11.744647  255215 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:07.516999  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:10.016647  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:11.746674  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:11.833127  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.853282  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:11.853316  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:11.858219  255215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.858353  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:11.889330  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.896554  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:11.896595  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:11.906260  255215 node_ready.go:49] node "embed-certs-437183" has status "Ready":"True"
	I0817 22:30:11.906292  255215 node_ready.go:38] duration metric: took 48.027482ms waiting for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.906305  255215 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:11.949379  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:11.949409  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:12.023543  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:12.131426  255215 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:14.420517  255215 pod_ready.go:102] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.647805  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.814629092s)
	I0817 22:30:14.647842  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78945104s)
	I0817 22:30:14.647874  255215 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:14.647904  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.758517925s)
	I0817 22:30:14.647915  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648017  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648042  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648067  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648478  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.648532  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.648626  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.648638  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648656  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648882  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.649025  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.649050  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.649069  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.650529  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.650577  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.650586  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.650600  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.650614  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.651171  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.651230  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.652509  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652529  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.652688  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652708  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.175766  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.152137099s)
	I0817 22:30:15.175888  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.175915  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176344  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.176343  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.176428  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.176452  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.176488  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176915  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.178804  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.178827  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.178840  255215 addons.go:467] Verifying addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:15.180928  255215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:30:12.018605  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.519226  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:15.182515  255215 addons.go:502] enable addons completed in 3.600222172s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:30:16.920634  255215 pod_ready.go:92] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.920664  255215 pod_ready.go:81] duration metric: took 4.789200515s waiting for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.920674  255215 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937440  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.937469  255215 pod_ready.go:81] duration metric: took 16.789093ms waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937483  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944411  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.944437  255215 pod_ready.go:81] duration metric: took 6.944986ms waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944451  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952239  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.952267  255215 pod_ready.go:81] duration metric: took 7.807798ms waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952281  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815597  255215 pod_ready.go:92] pod "kube-proxy-2f6jz" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:17.815630  255215 pod_ready.go:81] duration metric: took 863.340907ms waiting for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815644  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108648  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:18.108683  255215 pod_ready.go:81] duration metric: took 293.029473ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108693  255215 pod_ready.go:38] duration metric: took 6.202373203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:18.108726  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:18.108789  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:18.129379  255215 api_server.go:72] duration metric: took 6.38701969s to wait for apiserver process to appear ...
	I0817 22:30:18.129409  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:18.129425  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:30:18.138226  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:30:18.141542  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:18.141568  255215 api_server.go:131] duration metric: took 12.152138ms to wait for apiserver health ...
	I0817 22:30:18.141579  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:18.312736  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:30:18.312782  255215 system_pods.go:61] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.312790  255215 system_pods.go:61] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.312798  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.312804  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.312811  255215 system_pods.go:61] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.312817  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.312831  255215 system_pods.go:61] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.312841  255215 system_pods.go:61] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.312855  255215 system_pods.go:74] duration metric: took 171.269837ms to wait for pod list to return data ...
	I0817 22:30:18.312868  255215 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:18.511271  255215 default_sa.go:45] found service account: "default"
	I0817 22:30:18.511380  255215 default_sa.go:55] duration metric: took 198.492073ms for default service account to be created ...
	I0817 22:30:18.511401  255215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:18.710880  255215 system_pods.go:86] 8 kube-system pods found
	I0817 22:30:18.710911  255215 system_pods.go:89] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.710917  255215 system_pods.go:89] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.710921  255215 system_pods.go:89] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.710926  255215 system_pods.go:89] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.710929  255215 system_pods.go:89] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.710933  255215 system_pods.go:89] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.710943  255215 system_pods.go:89] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.710949  255215 system_pods.go:89] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.710958  255215 system_pods.go:126] duration metric: took 199.549571ms to wait for k8s-apps to be running ...
	I0817 22:30:18.710967  255215 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:18.711013  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:18.725788  255215 system_svc.go:56] duration metric: took 14.807351ms WaitForService to wait for kubelet.
	I0817 22:30:18.725819  255215 kubeadm.go:581] duration metric: took 6.983465617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:18.725846  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:18.908038  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:18.908079  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:18.908093  255215 node_conditions.go:105] duration metric: took 182.240177ms to run NodePressure ...
	I0817 22:30:18.908108  255215 start.go:228] waiting for startup goroutines ...
	I0817 22:30:18.908127  255215 start.go:233] waiting for cluster config update ...
	I0817 22:30:18.908142  255215 start.go:242] writing updated cluster config ...
	I0817 22:30:18.908536  255215 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:18.962718  255215 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:18.965052  255215 out.go:177] * Done! kubectl is now configured to use "embed-certs-437183" cluster and "default" namespace by default
	I0817 22:30:17.018314  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:19.517055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:21.517216  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:22.302082  255491 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:30:22.302198  255491 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:22.302316  255491 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:22.302392  255491 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:22.302537  255491 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:22.302623  255491 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:22.304947  255491 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:22.305043  255491 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:22.305112  255491 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:22.305227  255491 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:22.305295  255491 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:22.305389  255491 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:22.305466  255491 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:22.305540  255491 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:22.305614  255491 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:22.305703  255491 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:22.305801  255491 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:22.305861  255491 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:22.305956  255491 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:22.306043  255491 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:22.306141  255491 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:22.306231  255491 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:22.306313  255491 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:22.306462  255491 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:22.306597  255491 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:22.306674  255491 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:30:22.306778  255491 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:22.308372  255491 out.go:204]   - Booting up control plane ...
	I0817 22:30:22.308478  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:22.308565  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:22.308644  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:22.308735  255491 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:22.308942  255491 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:22.309046  255491 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003655 seconds
	I0817 22:30:22.309195  255491 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:22.309352  255491 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:22.309430  255491 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:22.309656  255491 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-321287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:30:22.309729  255491 kubeadm.go:322] [bootstrap-token] Using token: vtugjh.yrdml71jezyixk01
	I0817 22:30:22.311499  255491 out.go:204]   - Configuring RBAC rules ...
	I0817 22:30:22.311610  255491 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:30:22.311706  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:30:22.311887  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:30:22.312069  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:30:22.312240  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:30:22.312338  255491 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:30:22.312462  255491 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:30:22.312516  255491 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:30:22.312583  255491 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:30:22.312595  255491 kubeadm.go:322] 
	I0817 22:30:22.312680  255491 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:30:22.312693  255491 kubeadm.go:322] 
	I0817 22:30:22.312798  255491 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:30:22.312806  255491 kubeadm.go:322] 
	I0817 22:30:22.312847  255491 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:30:22.312926  255491 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:30:22.313008  255491 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:30:22.313016  255491 kubeadm.go:322] 
	I0817 22:30:22.313073  255491 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:30:22.313092  255491 kubeadm.go:322] 
	I0817 22:30:22.313135  255491 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:30:22.313141  255491 kubeadm.go:322] 
	I0817 22:30:22.313180  255491 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:30:22.313271  255491 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:30:22.313397  255491 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:30:22.313421  255491 kubeadm.go:322] 
	I0817 22:30:22.313561  255491 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:30:22.313670  255491 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:30:22.313691  255491 kubeadm.go:322] 
	I0817 22:30:22.313790  255491 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.313910  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:30:22.313930  255491 kubeadm.go:322] 	--control-plane 
	I0817 22:30:22.313933  255491 kubeadm.go:322] 
	I0817 22:30:22.314017  255491 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:30:22.314029  255491 kubeadm.go:322] 
	I0817 22:30:22.314161  255491 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.314324  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:30:22.314342  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:30:22.314352  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:30:22.316092  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:30:22.317823  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:30:22.330216  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:30:22.364427  255491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:30:22.364530  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.364541  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=default-k8s-diff-port-321287 minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.398800  255491 ops.go:34] apiserver oom_adj: -16
	I0817 22:30:22.789239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.908906  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.507279  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.007071  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.507204  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.007980  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.507764  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.007834  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.507449  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.518185  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:26.017066  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:27.007162  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:27.507978  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.008024  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.507376  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.007583  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.507355  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.007416  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.507014  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.007539  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.507116  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.516778  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:31.016979  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:32.007363  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:32.508019  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.007624  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.507337  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.007239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.507255  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.007804  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.507323  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.647403  255491 kubeadm.go:1081] duration metric: took 13.282950211s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:35.647439  255491 kubeadm.go:406] StartCluster complete in 5m30.275148595s
	I0817 22:30:35.647465  255491 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.647562  255491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:35.649294  255491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.649625  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:35.649672  255491 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:35.649793  255491 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649815  255491 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.649827  255491 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:35.649857  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:35.649897  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.649914  255491 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649931  255491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-321287"
	I0817 22:30:35.650130  255491 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.650154  255491 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.650163  255491 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:35.650207  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.650360  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650362  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650397  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650456  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650616  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650660  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.666863  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0817 22:30:35.666883  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0817 22:30:35.667444  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.667657  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.668085  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668105  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668245  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668256  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668780  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.669523  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.669553  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.670006  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:30:35.670382  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.670448  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.670513  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.670985  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.671005  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.671824  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.672870  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.672905  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.682146  255491 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.682167  255491 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:35.682200  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.682640  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.682674  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.690436  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0817 22:30:35.691039  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.691642  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.691666  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.692056  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.692328  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.692416  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0817 22:30:35.693048  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.693566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.693588  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.693974  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.694180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.694314  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.696623  255491 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:35.696015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.698535  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:35.698555  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:35.698593  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.700284  255491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:35.702071  255491 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.702097  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:35.702127  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.703050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.703111  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.703161  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703297  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.703498  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.703605  255491 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-321287" context rescaled to 1 replicas
	I0817 22:30:35.703641  255491 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:35.706989  255491 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:35.703707  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.707227  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.707832  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40363
	I0817 22:30:35.708116  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.709223  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:35.709358  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.709408  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.709426  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.709650  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.709767  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.709979  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.710587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.710608  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.711008  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.711578  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.711631  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.730317  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35051
	I0817 22:30:35.730875  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.731566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.731595  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.731993  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.732228  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.734475  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.734778  255491 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.734799  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:35.734822  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.737878  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.738359  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738478  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.739396  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.739599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.739850  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.902960  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.913205  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.936947  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:35.936977  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:35.977717  255491 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.977876  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:35.984231  255491 node_ready.go:49] node "default-k8s-diff-port-321287" has status "Ready":"True"
	I0817 22:30:35.984286  255491 node_ready.go:38] duration metric: took 6.524258ms waiting for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.984302  255491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
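
The readiness gates logged above can be reproduced against the same profile with plain kubectl; a minimal sketch, assuming the context name from the log and matching the 6m0s budget:

    kubectl --context default-k8s-diff-port-321287 wait --for=condition=Ready \
      node/default-k8s-diff-port-321287 --timeout=6m0s
    kubectl --context default-k8s-diff-port-321287 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=6m0s

The second command only covers the kube-dns label; the extra wait in the log additionally tracks the etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler pods.
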
	I0817 22:30:36.008884  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:36.008915  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:36.010024  255491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.073572  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.073607  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:36.139665  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.382827  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.382863  255491 pod_ready.go:81] duration metric: took 372.809939ms waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.382878  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513607  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.513640  255491 pod_ready.go:81] duration metric: took 130.752675ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513653  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610942  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.610974  255491 pod_ready.go:81] duration metric: took 97.312774ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610989  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:33.017198  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:34.709633  254975 pod_ready.go:81] duration metric: took 4m0.001081095s waiting for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	E0817 22:30:34.709679  254975 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:30:34.709709  254975 pod_ready.go:38] duration metric: took 4m1.187941338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:34.709762  254975 kubeadm.go:640] restartCluster took 5m3.210628062s
	W0817 22:30:34.709854  254975 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:30:34.709895  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:30:38.629738  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.716488882s)
	I0817 22:30:38.629799  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651889874s)
	I0817 22:30:38.629829  255491 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
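
The sed pipeline that just completed inserts a log directive ahead of errors and a hosts block ahead of forward . /etc/resolv.conf in the CoreDNS Corefile, so host.minikube.internal resolves to the host-side gateway (192.168.50.1 here). One way to confirm the injected record, sketched with the context name from the log:

    kubectl --context default-k8s-diff-port-321287 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # expected to show: 192.168.50.1 host.minikube.internal, followed by fallthrough
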
	I0817 22:30:38.629802  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629871  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.629753  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.726738359s)
	I0817 22:30:38.629944  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629971  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630368  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630389  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630401  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630429  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630528  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630559  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630578  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630587  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630677  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.630707  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630732  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630973  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630991  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.631004  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.631007  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.631015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.632993  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.633019  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.633033  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.758987  255491 pod_ready.go:102] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:39.084274  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.944554423s)
	I0817 22:30:39.084336  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.084785  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.084799  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:39.084817  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.084829  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084842  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.085152  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.085168  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.085179  255491 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-321287"
	I0817 22:30:39.087296  255491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:30:39.089202  255491 addons.go:502] enable addons completed in 3.439530445s: enabled=[storage-provisioner default-storageclass metrics-server]
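
For comparison, the same addons can be managed per profile from the minikube CLI rather than through the integration harness; a sketch, assuming the profile name from the log (storage-provisioner and default-storageclass are normally enabled by default):

    minikube -p default-k8s-diff-port-321287 addons enable metrics-server
    minikube -p default-k8s-diff-port-321287 addons list
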
	I0817 22:30:41.238328  255491 pod_ready.go:92] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.238358  255491 pod_ready.go:81] duration metric: took 4.627360634s waiting for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.238376  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.244985  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.245011  255491 pod_ready.go:81] duration metric: took 6.626883ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.245022  255491 pod_ready.go:38] duration metric: took 5.260700173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:41.245042  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:41.245097  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:41.262899  255491 api_server.go:72] duration metric: took 5.559222986s to wait for apiserver process to appear ...
	I0817 22:30:41.262935  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:41.262957  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:30:41.268642  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:30:41.269921  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:41.269947  255491 api_server.go:131] duration metric: took 7.005146ms to wait for apiserver health ...
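
The healthz probe above is a plain GET against the apiserver on the non-default port 8444; it can be repeated from the host with curl, a sketch that assumes anonymous access to /healthz is left at the kubeadm default and uses -k to skip verification of the cluster CA:

    curl -k https://192.168.50.30:8444/healthz
    # expected body: ok
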
	I0817 22:30:41.269955  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:41.276807  255491 system_pods.go:59] 9 kube-system pods found
	I0817 22:30:41.276844  255491 system_pods.go:61] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.276855  255491 system_pods.go:61] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.276863  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.276868  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.276875  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.276883  255491 system_pods.go:61] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.276890  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.276908  255491 system_pods.go:61] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.276918  255491 system_pods.go:61] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.276929  255491 system_pods.go:74] duration metric: took 6.967523ms to wait for pod list to return data ...
	I0817 22:30:41.276941  255491 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:41.279696  255491 default_sa.go:45] found service account: "default"
	I0817 22:30:41.279724  255491 default_sa.go:55] duration metric: took 2.773544ms for default service account to be created ...
	I0817 22:30:41.279735  255491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:41.286220  255491 system_pods.go:86] 9 kube-system pods found
	I0817 22:30:41.286258  255491 system_pods.go:89] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.286269  255491 system_pods.go:89] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.286277  255491 system_pods.go:89] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.286283  255491 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.286287  255491 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.286292  255491 system_pods.go:89] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.286296  255491 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.286302  255491 system_pods.go:89] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.286306  255491 system_pods.go:89] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.286316  255491 system_pods.go:126] duration metric: took 6.576272ms to wait for k8s-apps to be running ...
	I0817 22:30:41.286326  255491 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:41.286373  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:41.301841  255491 system_svc.go:56] duration metric: took 15.499888ms WaitForService to wait for kubelet.
	I0817 22:30:41.301874  255491 kubeadm.go:581] duration metric: took 5.598205886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:41.301898  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:41.306253  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:41.306289  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:41.306300  255491 node_conditions.go:105] duration metric: took 4.396496ms to run NodePressure ...
	I0817 22:30:41.306311  255491 start.go:228] waiting for startup goroutines ...
	I0817 22:30:41.306320  255491 start.go:233] waiting for cluster config update ...
	I0817 22:30:41.306329  255491 start.go:242] writing updated cluster config ...
	I0817 22:30:41.306617  255491 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:41.363947  255491 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:41.366167  255491 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-321287" cluster and "default" namespace by default
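
With the profile marked Done, the kubeconfig now points at the new cluster; a quick sanity pass, sketched with the names reported in the log (the 1.28.0 client against the 1.27.4 server is within the supported one-minor-version skew):

    kubectl config current-context    # default-k8s-diff-port-321287
    kubectl version                   # client v1.28.0, server v1.27.4
    kubectl get pods -A
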
	I0817 22:30:47.861835  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.151914062s)
	I0817 22:30:47.861926  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:47.877704  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:47.888385  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:47.898212  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:47.898269  254975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0817 22:30:47.957871  254975 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0817 22:30:47.958020  254975 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:48.121563  254975 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:48.121724  254975 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:48.121869  254975 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:48.316212  254975 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:48.316379  254975 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:48.324040  254975 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0817 22:30:48.453946  254975 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:48.456278  254975 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:48.456403  254975 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:48.456486  254975 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:48.456629  254975 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:48.456723  254975 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:48.456831  254975 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:48.456916  254975 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:48.456992  254975 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:48.457084  254975 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:48.457233  254975 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:48.457347  254975 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:48.457400  254975 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:48.457478  254975 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:48.599977  254975 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:48.760474  254975 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:48.873066  254975 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:48.958450  254975 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:48.959335  254975 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:48.961565  254975 out.go:204]   - Booting up control plane ...
	I0817 22:30:48.961672  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:48.972854  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:48.974149  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:48.975110  254975 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:48.981334  254975 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:58.986028  254975 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004044 seconds
	I0817 22:30:58.986232  254975 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:59.005484  254975 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:59.530563  254975 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:59.530730  254975 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-294781 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 22:31:00.039739  254975 kubeadm.go:322] [bootstrap-token] Using token: y5v57w.cds9r5wk990e6rgq
	I0817 22:31:00.041700  254975 out.go:204]   - Configuring RBAC rules ...
	I0817 22:31:00.041831  254975 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:31:00.051302  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:31:00.056478  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:31:00.060403  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:31:00.065454  254975 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:31:00.155583  254975 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:31:00.472429  254975 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:31:00.474442  254975 kubeadm.go:322] 
	I0817 22:31:00.474512  254975 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:31:00.474554  254975 kubeadm.go:322] 
	I0817 22:31:00.474671  254975 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:31:00.474686  254975 kubeadm.go:322] 
	I0817 22:31:00.474708  254975 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:31:00.474808  254975 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:31:00.474883  254975 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:31:00.474895  254975 kubeadm.go:322] 
	I0817 22:31:00.474973  254975 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:31:00.475082  254975 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:31:00.475179  254975 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:31:00.475193  254975 kubeadm.go:322] 
	I0817 22:31:00.475308  254975 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0817 22:31:00.475421  254975 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:31:00.475431  254975 kubeadm.go:322] 
	I0817 22:31:00.475551  254975 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.475696  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:31:00.475750  254975 kubeadm.go:322]     --control-plane 	  
	I0817 22:31:00.475759  254975 kubeadm.go:322] 
	I0817 22:31:00.475881  254975 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:31:00.475937  254975 kubeadm.go:322] 
	I0817 22:31:00.476044  254975 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.476196  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:31:00.476725  254975 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
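
The bootstrap token embedded in the join commands printed above is short-lived (24 hours by default), so those commands stop working once it expires; a fresh one can be generated on the control plane with the standard kubeadm workflow, sketched below (not something this test runs):

    sudo kubeadm token create --print-join-command
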
	I0817 22:31:00.476766  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:31:00.476782  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:31:00.478932  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:31:00.480754  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:31:00.496449  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
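
The 457-byte file copied here is the bridge CNI configuration minikube writes for the crio runtime; the log does not reproduce its contents, but a bridge conflist of this kind generally has the shape below. This is an illustrative sketch only; the plugin fields and the 10.244.0.0/16 subnet are assumptions, not values read from the cluster.

    /etc/cni/net.d/1-k8s.conflist (illustrative):
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
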
	I0817 22:31:00.527578  254975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:31:00.527658  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.527769  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=old-k8s-version-294781 minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.809784  254975 ops.go:34] apiserver oom_adj: -16
	I0817 22:31:00.809925  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.991957  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:01.627311  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.126890  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.626673  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.127657  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.627284  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.127320  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.627026  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.127336  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.626721  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.127279  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.626697  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.127307  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.626920  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.127266  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.626970  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.126923  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.626808  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.127298  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.627182  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.126639  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.626681  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.127321  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.626904  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.127274  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.627272  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.127457  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.627280  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.127333  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.231130  254975 kubeadm.go:1081] duration metric: took 14.703542822s to wait for elevateKubeSystemPrivileges.
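
The long run of identical kubectl get sa default calls above is a poll: after creating the minikube-rbac clusterrolebinding and labelling the node, minikube retries the lookup roughly every 500ms until the default service account exists in the default namespace, which took about 14.7s here. The equivalent loop in shell, as a sketch using the paths from the log:

    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
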
	I0817 22:31:15.231183  254975 kubeadm.go:406] StartCluster complete in 5m43.780243338s
	I0817 22:31:15.231254  254975 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.231391  254975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:31:15.233245  254975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.233533  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:31:15.233848  254975 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:31:15.233927  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:31:15.233947  254975 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-294781"
	I0817 22:31:15.233968  254975 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-294781"
	W0817 22:31:15.233977  254975 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:31:15.233983  254975 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234001  254975 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234007  254975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-294781"
	I0817 22:31:15.234021  254975 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-294781"
	W0817 22:31:15.234040  254975 addons.go:240] addon metrics-server should already be in state true
	I0817 22:31:15.234075  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234097  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234576  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234581  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234650  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.252847  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0817 22:31:15.252891  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0817 22:31:15.253743  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.253833  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.254616  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254632  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.254713  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0817 22:31:15.254887  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254906  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.255216  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255276  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.255294  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255865  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255872  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255960  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.255977  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.256400  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.256604  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.269860  254975 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-294781"
	W0817 22:31:15.269883  254975 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:31:15.269911  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.270304  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.270335  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.273014  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0817 22:31:15.273532  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.274114  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.274134  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.274549  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.274769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.276415  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.276491  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0817 22:31:15.278935  254975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:31:15.277380  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.278041  254975 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-294781" context rescaled to 1 replicas
	I0817 22:31:15.280642  254975 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:31:15.282441  254975 out.go:177] * Verifying Kubernetes components...
	I0817 22:31:15.280856  254975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.281832  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.284263  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.284347  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:31:15.284348  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:31:15.284366  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.285256  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.285580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.288289  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.288456  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.290643  254975 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:31:15.289601  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.289769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.292678  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:31:15.292693  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:31:15.292721  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.292776  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.293060  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.293277  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.293791  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.297193  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0817 22:31:15.297816  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.298486  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.298506  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.298962  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.299508  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.299531  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.300275  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.300994  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.301024  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.301098  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.301296  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.301502  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.301651  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.321283  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0817 22:31:15.321876  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.322943  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.322971  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.323496  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.323842  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.326563  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.326910  254975 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.326933  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:31:15.326957  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.330190  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.330947  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.330978  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.331193  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.331422  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.331552  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.331681  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.497277  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.529500  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.531359  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:31:15.531381  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:31:15.585477  254975 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.585494  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:31:15.590969  254975 node_ready.go:49] node "old-k8s-version-294781" has status "Ready":"True"
	I0817 22:31:15.591001  254975 node_ready.go:38] duration metric: took 5.470452ms waiting for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.591012  254975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:15.594026  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:31:15.594077  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:31:15.596784  254975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:15.638420  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:15.638455  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:31:15.707735  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:16.690916  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.193582768s)
	I0817 22:31:16.690987  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691002  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691002  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161462189s)
	I0817 22:31:16.691042  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105375097s)
	I0817 22:31:16.691044  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691217  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691158  254975 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0817 22:31:16.691422  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691464  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691490  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691561  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691512  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691586  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691603  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691630  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691813  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691832  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692047  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692086  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692110  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.692130  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.692114  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.692460  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692480  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828440  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.120652237s)
	I0817 22:31:16.828511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828525  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.828913  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.828939  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828952  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828963  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.829228  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.829252  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.829264  254975 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-294781"
	I0817 22:31:16.829279  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.831430  254975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:31:16.834005  254975 addons.go:502] enable addons completed in 1.600151352s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:31:17.618673  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.110224  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.610989  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.611015  254975 pod_ready.go:81] duration metric: took 5.014205232s waiting for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.611025  254975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616618  254975 pod_ready.go:92] pod "kube-proxy-44jmp" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.616639  254975 pod_ready.go:81] duration metric: took 5.608097ms waiting for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616646  254975 pod_ready.go:38] duration metric: took 5.025620457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:20.616695  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:31:20.616748  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:31:20.633102  254975 api_server.go:72] duration metric: took 5.352419031s to wait for apiserver process to appear ...
	I0817 22:31:20.633131  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:31:20.633152  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:31:20.640585  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:31:20.641784  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:31:20.641807  254975 api_server.go:131] duration metric: took 8.66923ms to wait for apiserver health ...
	I0817 22:31:20.641815  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:31:20.647851  254975 system_pods.go:59] 4 kube-system pods found
	I0817 22:31:20.647904  254975 system_pods.go:61] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.647909  254975 system_pods.go:61] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.647917  254975 system_pods.go:61] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.647923  254975 system_pods.go:61] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.647929  254975 system_pods.go:74] duration metric: took 6.108947ms to wait for pod list to return data ...
	I0817 22:31:20.647937  254975 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:31:20.651451  254975 default_sa.go:45] found service account: "default"
	I0817 22:31:20.651485  254975 default_sa.go:55] duration metric: took 3.540013ms for default service account to be created ...
	I0817 22:31:20.651496  254975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:31:20.655529  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.655556  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.655561  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.655567  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.655575  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.655593  254975 retry.go:31] will retry after 194.203175ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:20.860033  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.860063  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.860069  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.860076  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.860082  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.860098  254975 retry.go:31] will retry after 273.217607ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.138457  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.138483  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.138488  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.138494  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.138501  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.138520  254975 retry.go:31] will retry after 311.999616ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.455473  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.455507  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.455513  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.455519  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.455526  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.455542  254975 retry.go:31] will retry after 462.378441ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.922656  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.922695  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.922703  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.922714  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.922724  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.922743  254975 retry.go:31] will retry after 595.850716ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:22.525024  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:22.525067  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:22.525076  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:22.525087  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:22.525100  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:22.525123  254975 retry.go:31] will retry after 916.880182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:23.446648  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:23.446678  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:23.446684  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:23.446691  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:23.446697  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:23.446717  254975 retry.go:31] will retry after 1.080769148s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:24.532239  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:24.532270  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:24.532277  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:24.532287  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:24.532296  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:24.532325  254975 retry.go:31] will retry after 1.261174641s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:25.798397  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:25.798430  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:25.798435  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:25.798442  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:25.798449  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:25.798465  254975 retry.go:31] will retry after 1.383083099s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:27.187782  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:27.187816  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:27.187821  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:27.187828  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:27.187834  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:27.187852  254975 retry.go:31] will retry after 1.954135672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:29.148294  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:29.148325  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:29.148330  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:29.148337  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:29.148344  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:29.148359  254975 retry.go:31] will retry after 2.632641562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:31.786946  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:31.786981  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:31.786988  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:31.786998  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:31.787010  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:31.787030  254975 retry.go:31] will retry after 3.626446493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:35.421023  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:35.421053  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:35.421059  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:35.421065  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:35.421072  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:35.421089  254975 retry.go:31] will retry after 2.800907689s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:38.228118  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:38.228155  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:38.228165  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:38.228177  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:38.228187  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:38.228204  254975 retry.go:31] will retry after 3.699626464s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:41.932868  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:41.932902  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:41.932908  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:41.932915  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:41.932922  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:41.932939  254975 retry.go:31] will retry after 6.965217948s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:48.913824  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:48.913866  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:48.913875  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:48.913899  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:48.913909  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:48.913931  254975 retry.go:31] will retry after 7.880328521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:56.800829  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:56.800868  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:56.800876  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:56.800887  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:56.800893  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:56.800915  254975 retry.go:31] will retry after 7.054585059s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:32:03.878268  254975 system_pods.go:86] 7 kube-system pods found
	I0817 22:32:03.878297  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:03.878304  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Pending
	I0817 22:32:03.878308  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Pending
	I0817 22:32:03.878311  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:03.878316  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:03.878324  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:03.878331  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:03.878351  254975 retry.go:31] will retry after 13.129481457s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0817 22:32:17.015570  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:17.015609  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:17.015619  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:17.015627  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:17.015634  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Pending
	I0817 22:32:17.015640  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:17.015647  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:17.015672  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:17.015682  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:17.015709  254975 retry.go:31] will retry after 15.332291563s: missing components: kube-controller-manager
	I0817 22:32:32.354549  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:32.354587  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:32.354596  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:32.354603  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:32.354613  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Running
	I0817 22:32:32.354619  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:32.354626  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:32.354637  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:32.354646  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:32.354657  254975 system_pods.go:126] duration metric: took 1m11.703154434s to wait for k8s-apps to be running ...
	I0817 22:32:32.354700  254975 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:32:32.354766  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:32:32.372492  254975 system_svc.go:56] duration metric: took 17.765249ms WaitForService to wait for kubelet.
	I0817 22:32:32.372541  254975 kubeadm.go:581] duration metric: took 1m17.091866023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:32:32.372573  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:32:32.377413  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:32:32.377442  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:32:32.377455  254975 node_conditions.go:105] duration metric: took 4.875282ms to run NodePressure ...
	I0817 22:32:32.377467  254975 start.go:228] waiting for startup goroutines ...
	I0817 22:32:32.377473  254975 start.go:233] waiting for cluster config update ...
	I0817 22:32:32.377483  254975 start.go:242] writing updated cluster config ...
	I0817 22:32:32.377828  254975 ssh_runner.go:195] Run: rm -f paused
	I0817 22:32:32.433865  254975 start.go:600] kubectl: 1.28.0, cluster: 1.16.0 (minor skew: 12)
	I0817 22:32:32.436131  254975 out.go:177] 
	W0817 22:32:32.437621  254975 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0817 22:32:32.439072  254975 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0817 22:32:32.440794  254975 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-294781" cluster and "default" namespace by default
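	The start log above records two readiness checks: the apiserver healthz probe against https://192.168.72.56:8443/healthz (api_server.go) and the retry.go loop that re-lists kube-system pods with a growing delay until etcd, kube-apiserver, kube-controller-manager and kube-scheduler show up. The Go sketch below reproduces that polling pattern for reference only; it is not minikube's implementation, and the endpoint URL, timeout values and the InsecureSkipVerify setting are assumptions made for the example (a real client would verify against the cluster CA).

	// Illustrative sketch of the poll-with-backoff pattern seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses,
	// growing the wait between attempts much like the retry intervals in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only; do not skip verification in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, string(body))
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
			}
			time.Sleep(backoff)
			if backoff < 5*time.Second {
				backoff *= 2 // grow the wait, mirroring the increasing retry intervals logged above
			}
		}
	}

	func main() {
		// 192.168.72.56:8443 is the endpoint the log polls; substitute your own cluster's.
		if err := waitForHealthz("https://192.168.72.56:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}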
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:24:46 UTC, ends at Thu 2023-08-17 22:39:43 UTC. --
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.142290115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb5064d3-b239-41c1-8786-6472a1af41de name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.142668779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb5064d3-b239-41c1-8786-6472a1af41de name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.143817050Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=a1cb8a38-1488-453c-9add-63583a94c193 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.143948280Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1692311440562347072,StartedAt:1692311440600828092,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/02b2bd5a-9e11-4476-81c5-fe927c4ef543/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/02b2bd5a-9e11-4476-81c5-fe927c4ef543/containers/storage-provisioner/d882dd73,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/02b2bd5a-9e11-4476-81c5-fe927c4ef543/volumes/kubernetes.io~projected/kube-api-access-5pvvr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_02b2bd5a-9e11-4476-81c5-fe927c4ef543/storage-pro
visioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=a1cb8a38-1488-453c-9add-63583a94c193 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.144814989Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=e2c6095d-1d00-4e8f-b236-f6b9c2129c61 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.144978765Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1692311440299723855,StartedAt:1692311440368995452,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.27.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1fedb8b2-1800-4933-b964-6080cc760045/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1fedb8b2-1800-4933-b964-6080cc760045/containers/kube-proxy/922a66b4,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/1fedb8b2-1800-4933-b964-6080cc760045/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kub
ernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1fedb8b2-1800-4933-b964-6080cc760045/volumes/kubernetes.io~projected/kube-api-access-fnldq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-k2jz7_1fedb8b2-1800-4933-b964-6080cc760045/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e2c6095d-1d00-4e8f-b236-f6b9c2129c61 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.145769230Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=fff8057a-26b5-425d-aab3-f24b32e478f6 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.145903006Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1692311439786865946,StartedAt:1692311439832360247,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/44728d42-fce0-4a11-ba30-094a44b9313a/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/44728d42-fce0-4a11-ba30-094a44b9313a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/44728d42-fce0-4a11-ba30-094a44b9313a/containers/coredns/32f8c95e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,Host
Path:/var/lib/kubelet/pods/44728d42-fce0-4a11-ba30-094a44b9313a/volumes/kubernetes.io~projected/kube-api-access-6v95b,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5d78c9869d-2gh8n_44728d42-fce0-4a11-ba30-094a44b9313a/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=fff8057a-26b5-425d-aab3-f24b32e478f6 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.146985129Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=0c73bd23-07e0-4d1e-b4b5-b72b0a60143b name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.147162189Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1692311414312492608,StartedAt:1692311415844145914,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.7-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41aa6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/da775db6f21c1f41aa6b992356315d15/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/da775db6f21c1f41aa6b992356315d15/containers/etcd/18eb3bae,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-default-k8s-diff-port-321287_da775db6f21c1f41aa6b992356315d15/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=0c73bd23-07e0-4d1e-b4b5-b72b0a60143b name=/runtime.v1.Runtim
eService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.148688433Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=4af6b8c9-2c67-4feb-9bce-6e2b2f0f1434 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.148865059Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1692311413631688419,StartedAt:1692311414493042747,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.27.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCo
unt: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6e7412a207e7573fc22d8c2b5f5da127/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6e7412a207e7573fc22d8c2b5f5da127/containers/kube-controller-manager/f580320e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propaga
tion:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-321287_6e7412a207e7573fc22d8c2b5f5da127/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=4af6b8c9-2c67-4feb-9bce-6e2b2f0f1434 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.149984353Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=7806fa08-47eb-47a0-a64d-555953b37146 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.150110389Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1692311413567508102,StartedAt:1692311414436171398,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.27.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/73d361ab4418927a569781cffbcb19c0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/73d361ab4418927a569781cffbcb19c0/containers/kube-apiserver/897fc6c3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-default-
k8s-diff-port-321287_73d361ab4418927a569781cffbcb19c0/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=7806fa08-47eb-47a0-a64d-555953b37146 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.151085172Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=19197c12-64bb-4206-a13e-364a26b15867 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.151192982Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1692311413495527369,StartedAt:1692311415315177556,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.27.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/04e945a80216f830497b31b89421c70e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/04e945a80216f830497b31b89421c70e/containers/kube-scheduler/055c4685,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-321287_04e945a80216f830497b31b89421c70e/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=19197c12-64bb-4206-a13e-364a26b15867 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.176265257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=80f89f13-982f-497a-8bcb-2537f268279a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.176354523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=80f89f13-982f-497a-8bcb-2537f268279a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.176693345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=80f89f13-982f-497a-8bcb-2537f268279a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.219765086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ce2839b7-9be2-470c-9924-4493c7fcc9d8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.219854565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ce2839b7-9be2-470c-9924-4493c7fcc9d8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.220101867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce2839b7-9be2-470c-9924-4493c7fcc9d8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.259217586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fdf0ee9-7ad0-461e-a2f2-5d2ded6d541d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.259307938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fdf0ee9-7ad0-461e-a2f2-5d2ded6d541d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:39:43 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:39:43.259672541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fdf0ee9-7ad0-461e-a2f2-5d2ded6d541d name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	9fd26bcc5bfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d6f816a6adc86
	5bdba67d69f89       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   9 minutes ago       Running             kube-proxy                0                   f825ccdf72b0c
	7403ecb81788c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   94f102d81ed5e
	5d3f4cfe29dcc       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   9 minutes ago       Running             etcd                      2                   a5b502f330514
	0767cda0efa92       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   9 minutes ago       Running             kube-controller-manager   2                   3ad675abb698b
	6c82fbf22edcc       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   9 minutes ago       Running             kube-apiserver            2                   27d0d2a523921
	fd04443a08b3d       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   9 minutes ago       Running             kube-scheduler            2                   009245a17138c
	
	* 
	* ==> coredns [7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-321287
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-321287
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=default-k8s-diff-port-321287
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-321287
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:39:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:35:49 +0000   Thu, 17 Aug 2023 22:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:35:49 +0000   Thu, 17 Aug 2023 22:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:35:49 +0000   Thu, 17 Aug 2023 22:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:35:49 +0000   Thu, 17 Aug 2023 22:30:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.30
	  Hostname:    default-k8s-diff-port-321287
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b18b3c66a6b4dd9966f408987de00b0
	  System UUID:                1b18b3c6-6a6b-4dd9-966f-408987de00b0
	  Boot ID:                    deb09338-68db-4e09-8863-8f7556e89e91
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-2gh8n                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-321287                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-321287             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-321287    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-k2jz7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-321287             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 metrics-server-74d5c6b9c-lw5bp                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m2s                   kube-proxy       
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node default-k8s-diff-port-321287 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s                  kubelet          Node default-k8s-diff-port-321287 status is now: NodeReady
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-321287 event: Registered Node default-k8s-diff-port-321287 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075948] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.441486] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.530699] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148514] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.633030] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.079563] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.128876] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.183083] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.133846] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.267796] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[Aug17 22:25] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +20.819164] kauditd_printk_skb: 29 callbacks suppressed
	[Aug17 22:30] systemd-fstab-generator[3559]: Ignoring "noauto" for root device
	[ +10.386451] systemd-fstab-generator[3885]: Ignoring "noauto" for root device
	[ +27.877441] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6] <==
	* {"level":"info","ts":"2023-08-17T22:30:15.930Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"21545a69824e3d79","initial-advertise-peer-urls":["https://192.168.50.30:2380"],"listen-peer-urls":["https://192.168.50.30:2380"],"advertise-client-urls":["https://192.168.50.30:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.30:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T22:30:15.930Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T22:30:15.930Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.30:2380"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 received MsgPreVoteResp from 21545a69824e3d79 at term 1"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 received MsgVoteResp from 21545a69824e3d79 at term 2"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 became leader at term 2"}
	{"level":"info","ts":"2023-08-17T22:30:16.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 21545a69824e3d79 elected leader 21545a69824e3d79 at term 2"}
	{"level":"info","ts":"2023-08-17T22:30:16.408Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"21545a69824e3d79","local-member-attributes":"{Name:default-k8s-diff-port-321287 ClientURLs:[https://192.168.50.30:2379]}","request-path":"/0/members/21545a69824e3d79/attributes","cluster-id":"4c46e38203538bcd","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T22:30:16.408Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:30:16.409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T22:30:16.409Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-17T22:30:16.409Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:30:16.411Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.30:2379"}
	{"level":"info","ts":"2023-08-17T22:30:16.408Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:30:16.413Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4c46e38203538bcd","local-member-id":"21545a69824e3d79","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:30:16.413Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T22:30:16.413Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:30:16.413Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-08-17T22:30:36.330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.063098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:30:36.331Z","caller":"traceutil/trace.go:171","msg":"trace[2114995631] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:369; }","duration":"162.478584ms","start":"2023-08-17T22:30:36.169Z","end":"2023-08-17T22:30:36.331Z","steps":["trace[2114995631] 'agreement among raft nodes before linearized reading'  (duration: 114.727223ms)","trace[2114995631] 'range keys from in-memory index tree'  (duration: 42.246105ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:30:36.331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.028791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-321287\" ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2023-08-17T22:30:36.332Z","caller":"traceutil/trace.go:171","msg":"trace[1053105977] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-321287; range_end:; response_count:1; response_revision:369; }","duration":"147.084413ms","start":"2023-08-17T22:30:36.184Z","end":"2023-08-17T22:30:36.332Z","steps":["trace[1053105977] 'agreement among raft nodes before linearized reading'  (duration: 99.871031ms)","trace[1053105977] 'range keys from in-memory index tree'  (duration: 46.955829ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  22:39:43 up 15 min,  0 users,  load average: 0.16, 0.39, 0.34
	Linux default-k8s-diff-port-321287 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f] <==
	* E0817 22:35:19.305348       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:35:19.306664       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:36:18.199404       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:36:18.199708       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:36:19.305339       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:36:19.305463       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:36:19.305471       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:36:19.307761       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:36:19.307943       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:36:19.307990       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:37:18.198725       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:37:18.199023       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:38:18.198660       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:38:18.198712       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:38:19.306454       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:38:19.306659       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:38:19.306769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:38:19.308676       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:38:19.308792       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:38:19.308827       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:39:18.199616       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:39:18.199706       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b] <==
	* W0817 22:33:35.752201       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:34:05.288193       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:34:05.763971       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:34:35.295832       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:34:35.774688       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:35:05.303048       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:35:05.786347       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:35:35.309734       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:35:35.797911       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:36:05.316508       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:36:05.809721       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:36:35.324388       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:36:35.820282       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:37:05.332835       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:37:05.832213       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:37:35.339828       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:37:35.842391       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:38:05.346090       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:38:05.853704       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:38:35.353521       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:38:35.864234       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:39:05.361060       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:39:05.875304       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:39:35.367942       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:39:35.885185       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78] <==
	* I0817 22:30:40.520516       1 node.go:141] Successfully retrieved node IP: 192.168.50.30
	I0817 22:30:40.520787       1 server_others.go:110] "Detected node IP" address="192.168.50.30"
	I0817 22:30:40.520829       1 server_others.go:554] "Using iptables proxy"
	I0817 22:30:40.629748       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 22:30:40.629837       1 server_others.go:192] "Using iptables Proxier"
	I0817 22:30:40.629911       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 22:30:40.631106       1 server.go:658] "Version info" version="v1.27.4"
	I0817 22:30:40.631161       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:30:40.639782       1 config.go:188] "Starting service config controller"
	I0817 22:30:40.639841       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 22:30:40.639897       1 config.go:97] "Starting endpoint slice config controller"
	I0817 22:30:40.639914       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 22:30:40.641105       1 config.go:315] "Starting node config controller"
	I0817 22:30:40.641149       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 22:30:40.739910       1 shared_informer.go:318] Caches are synced for service config
	I0817 22:30:40.740015       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 22:30:40.742681       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c] <==
	* W0817 22:30:19.260410       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:30:19.260507       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 22:30:19.273750       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:19.273848       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0817 22:30:19.317695       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:30:19.317795       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 22:30:19.330255       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:30:19.330332       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0817 22:30:19.342390       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:30:19.342452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 22:30:19.492879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:19.492933       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 22:30:19.529058       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 22:30:19.529113       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0817 22:30:19.567792       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:19.567867       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 22:30:19.699217       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:30:19.699273       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 22:30:19.749246       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:30:19.749392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 22:30:19.786010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:30:19.786098       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 22:30:19.794357       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 22:30:19.794455       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 22:30:21.903111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:24:46 UTC, ends at Thu 2023-08-17 22:39:43 UTC. --
	Aug 17 22:36:52 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:36:52.573012    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:37:05 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:37:05.572949    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:37:20 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:37:20.573376    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:37:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:37:22.654926    3892 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:37:22 default-k8s-diff-port-321287 kubelet[3892]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:37:22 default-k8s-diff-port-321287 kubelet[3892]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:37:22 default-k8s-diff-port-321287 kubelet[3892]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:37:33 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:37:33.573186    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:37:44 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:37:44.573633    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:37:59 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:37:59.573203    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:38:12 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:38:12.573439    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:38:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:38:22.656763    3892 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:38:22 default-k8s-diff-port-321287 kubelet[3892]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:38:22 default-k8s-diff-port-321287 kubelet[3892]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:38:22 default-k8s-diff-port-321287 kubelet[3892]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:38:23 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:38:23.573269    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:38:38 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:38:38.574283    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:38:50 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:38:50.573311    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:39:04 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:39:04.572890    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:39:17 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:39:17.573008    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:39:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:39:22.657653    3892 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:39:22 default-k8s-diff-port-321287 kubelet[3892]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:39:22 default-k8s-diff-port-321287 kubelet[3892]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:39:22 default-k8s-diff-port-321287 kubelet[3892]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:39:32 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:39:32.573468    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	
	* 
	* ==> storage-provisioner [9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41] <==
	* I0817 22:30:40.634482       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:30:40.662159       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:30:40.663028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:30:40.696318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:30:40.698116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-321287_1a13d18e-b8eb-4dab-8860-b65ca51cff07!
	I0817 22:30:40.705200       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"369f6126-d0c1-4c9f-b15f-d77f0f393dd4", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-321287_1a13d18e-b8eb-4dab-8860-b65ca51cff07 became leader
	I0817 22:30:40.810215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-321287_1a13d18e-b8eb-4dab-8860-b65ca51cff07!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-lw5bp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 describe pod metrics-server-74d5c6b9c-lw5bp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321287 describe pod metrics-server-74d5c6b9c-lw5bp: exit status 1 (80.138463ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-lw5bp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-321287 describe pod metrics-server-74d5c6b9c-lw5bp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.45s)
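
Note: the "describe pod" call above was run without a namespace, so kubectl looked in "default", while the kubelet log records the pod as kube-system/metrics-server-74d5c6b9c-lw5bp; that most likely explains the NotFound. A namespaced lookup along these lines (illustrative only, not part of the recorded run) would be expected to find it:

	kubectl --context default-k8s-diff-port-321287 -n kube-system describe pod metrics-server-74d5c6b9c-lw5bp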

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0817 22:32:56.109454  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:33:09.343885  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 22:33:30.601839  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:33:37.090746  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:33:42.966423  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:34:19.154469  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:34:35.282603  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:35:06.015303  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:35:20.385656  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:35:31.665466  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 22:35:50.284339  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:35:55.683591  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:35:58.331387  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:36:43.431055  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:36:54.714308  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 22:37:07.552873  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:37:14.045186  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:37:56.109737  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:38:09.344434  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-294781 -n old-k8s-version-294781
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:41:33.027612955 +0000 UTC m=+5466.668380682
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
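The wait above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for up to 9m0s. A rough kubectl equivalent of that readiness check (illustrative only; the harness polls via client-go rather than shelling out) is:

	kubectl --context old-k8s-version-294781 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m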
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-294781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-294781 logs -n 25: (1.672797087s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-975779 sudo cat                              | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo find                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo crio                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-975779                                       | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:20:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:20:16.712686  255491 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:20:16.712825  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.712835  255491 out.go:309] Setting ErrFile to fd 2...
	I0817 22:20:16.712839  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.713062  255491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:20:16.713667  255491 out.go:303] Setting JSON to false
	I0817 22:20:16.714624  255491 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25342,"bootTime":1692285475,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:20:16.714682  255491 start.go:138] virtualization: kvm guest
	I0817 22:20:16.717535  255491 out.go:177] * [default-k8s-diff-port-321287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:20:16.719151  255491 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:20:16.720536  255491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:20:16.719158  255491 notify.go:220] Checking for updates...
	I0817 22:20:16.724470  255491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:20:16.726182  255491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:20:16.727902  255491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:20:16.729516  255491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:20:16.731373  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:20:16.731749  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.731825  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.746961  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0817 22:20:16.747404  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.748088  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.748116  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.748449  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.748618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.748847  255491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:20:16.749194  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.749239  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.764882  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0817 22:20:16.765357  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.765874  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.765901  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.766289  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.766480  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.802457  255491 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:20:16.804215  255491 start.go:298] selected driver: kvm2
	I0817 22:20:16.804235  255491 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.804379  255491 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:20:16.805157  255491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.805248  255491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:20:16.821166  255491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:20:16.821564  255491 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 22:20:16.821606  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:20:16.821619  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:20:16.821631  255491 start_flags.go:319] config:
	{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.821815  255491 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.823863  255491 out.go:177] * Starting control plane node default-k8s-diff-port-321287 in cluster default-k8s-diff-port-321287
	I0817 22:20:16.825296  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:20:16.825350  255491 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:20:16.825365  255491 cache.go:57] Caching tarball of preloaded images
	I0817 22:20:16.825521  255491 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:20:16.825536  255491 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 22:20:16.825660  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:20:16.825870  255491 start.go:365] acquiring machines lock for default-k8s-diff-port-321287: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:20:17.790384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:20.862432  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:26.942301  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:30.014393  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:36.094411  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:39.166376  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:45.246382  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:48.318418  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:54.398388  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:57.470394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:03.550380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:06.622365  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:12.702351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:15.774370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:21.854413  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:24.926351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:31.006415  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:34.078332  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:40.158437  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:43.230410  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:49.310359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:52.382386  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:58.462394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:01.534395  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:07.614359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:10.686384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:16.766363  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:19.838352  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:25.918380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:28.990416  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:35.070383  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:38.142364  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:44.222341  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:47.294387  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:53.374378  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:56.446375  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:02.526335  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:05.598406  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:11.678435  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:14.750370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:20.830484  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:23.902346  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:29.982456  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:33.054379  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:39.134436  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:42.206472  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:48.286396  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:51.358348  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:54.362645  255057 start.go:369] acquired machines lock for "no-preload-525875" in 4m31.301140971s
	I0817 22:23:54.362883  255057 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:23:54.362929  255057 fix.go:54] fixHost starting: 
	I0817 22:23:54.363423  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:23:54.363467  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:23:54.379127  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0817 22:23:54.379699  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:23:54.380334  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:23:54.380357  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:23:54.380797  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:23:54.381004  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:23:54.381209  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:23:54.383099  255057 fix.go:102] recreateIfNeeded on no-preload-525875: state=Stopped err=<nil>
	I0817 22:23:54.383145  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	W0817 22:23:54.383332  255057 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:23:54.385187  255057 out.go:177] * Restarting existing kvm2 VM for "no-preload-525875" ...
	I0817 22:23:54.360325  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:23:54.360394  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:23:54.362467  254975 machine.go:91] provisioned docker machine in 4m37.411699893s
	I0817 22:23:54.362520  254975 fix.go:56] fixHost completed within 4m37.434281244s
	I0817 22:23:54.362529  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 4m37.434304432s
	W0817 22:23:54.362577  254975 start.go:672] error starting host: provision: host is not running
	W0817 22:23:54.363017  254975 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0817 22:23:54.363033  254975 start.go:687] Will try again in 5 seconds ...
	I0817 22:23:54.386615  255057 main.go:141] libmachine: (no-preload-525875) Calling .Start
	I0817 22:23:54.386791  255057 main.go:141] libmachine: (no-preload-525875) Ensuring networks are active...
	I0817 22:23:54.387647  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network default is active
	I0817 22:23:54.387973  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network mk-no-preload-525875 is active
	I0817 22:23:54.388332  255057 main.go:141] libmachine: (no-preload-525875) Getting domain xml...
	I0817 22:23:54.389183  255057 main.go:141] libmachine: (no-preload-525875) Creating domain...
	I0817 22:23:55.639391  255057 main.go:141] libmachine: (no-preload-525875) Waiting to get IP...
	I0817 22:23:55.640405  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.640824  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.640956  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.640807  256033 retry.go:31] will retry after 256.854902ms: waiting for machine to come up
	I0817 22:23:55.899499  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.900003  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.900027  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.899976  256033 retry.go:31] will retry after 327.686689ms: waiting for machine to come up
	I0817 22:23:56.229604  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.230132  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.230156  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.230040  256033 retry.go:31] will retry after 464.52975ms: waiting for machine to come up
	I0817 22:23:56.695962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.696359  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.696397  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.696313  256033 retry.go:31] will retry after 556.975938ms: waiting for machine to come up
	I0817 22:23:57.255156  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.255625  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.255664  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.255564  256033 retry.go:31] will retry after 654.756806ms: waiting for machine to come up
	I0817 22:23:57.911407  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.911781  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.911805  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.911733  256033 retry.go:31] will retry after 915.751745ms: waiting for machine to come up
	I0817 22:23:59.364671  254975 start.go:365] acquiring machines lock for old-k8s-version-294781: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:23:58.828834  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:58.829178  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:58.829236  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:58.829153  256033 retry.go:31] will retry after 1.176413613s: waiting for machine to come up
	I0817 22:24:00.006988  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:00.007533  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:00.007603  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:00.007525  256033 retry.go:31] will retry after 1.031006631s: waiting for machine to come up
	I0817 22:24:01.039920  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:01.040354  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:01.040386  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:01.040293  256033 retry.go:31] will retry after 1.781447675s: waiting for machine to come up
	I0817 22:24:02.823240  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:02.823711  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:02.823755  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:02.823652  256033 retry.go:31] will retry after 1.47392319s: waiting for machine to come up
	I0817 22:24:04.299094  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:04.299543  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:04.299572  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:04.299479  256033 retry.go:31] will retry after 1.990284782s: waiting for machine to come up
	I0817 22:24:06.292369  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:06.292831  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:06.292862  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:06.292749  256033 retry.go:31] will retry after 3.34318874s: waiting for machine to come up
	I0817 22:24:09.637907  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:09.638389  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:09.638423  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:09.638335  256033 retry.go:31] will retry after 3.298106143s: waiting for machine to come up
	I0817 22:24:12.939215  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939668  255057 main.go:141] libmachine: (no-preload-525875) Found IP for machine: 192.168.61.196
	I0817 22:24:12.939692  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has current primary IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939709  255057 main.go:141] libmachine: (no-preload-525875) Reserving static IP address...
	I0817 22:24:12.940293  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.940330  255057 main.go:141] libmachine: (no-preload-525875) Reserved static IP address: 192.168.61.196
	I0817 22:24:12.940347  255057 main.go:141] libmachine: (no-preload-525875) DBG | skip adding static IP to network mk-no-preload-525875 - found existing host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"}
	I0817 22:24:12.940364  255057 main.go:141] libmachine: (no-preload-525875) DBG | Getting to WaitForSSH function...
	I0817 22:24:12.940381  255057 main.go:141] libmachine: (no-preload-525875) Waiting for SSH to be available...
	I0817 22:24:12.942523  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.942835  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.942870  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.943013  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH client type: external
	I0817 22:24:12.943058  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa (-rw-------)
	I0817 22:24:12.943104  255057 main.go:141] libmachine: (no-preload-525875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:12.943125  255057 main.go:141] libmachine: (no-preload-525875) DBG | About to run SSH command:
	I0817 22:24:12.943135  255057 main.go:141] libmachine: (no-preload-525875) DBG | exit 0
	I0817 22:24:14.123211  255215 start.go:369] acquired machines lock for "embed-certs-437183" in 4m31.345681226s
	I0817 22:24:14.123281  255215 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:14.123298  255215 fix.go:54] fixHost starting: 
	I0817 22:24:14.123769  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:14.123822  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:14.141321  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0817 22:24:14.141722  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:14.142372  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:24:14.142409  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:14.142871  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:14.143076  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:14.143300  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:24:14.144928  255215 fix.go:102] recreateIfNeeded on embed-certs-437183: state=Stopped err=<nil>
	I0817 22:24:14.144960  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	W0817 22:24:14.145216  255215 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:14.148036  255215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-437183" ...
	I0817 22:24:13.033987  255057 main.go:141] libmachine: (no-preload-525875) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:13.034450  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetConfigRaw
	I0817 22:24:13.035251  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.037756  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038141  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.038176  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038475  255057 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/config.json ...
	I0817 22:24:13.038679  255057 machine.go:88] provisioning docker machine ...
	I0817 22:24:13.038704  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.038922  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039086  255057 buildroot.go:166] provisioning hostname "no-preload-525875"
	I0817 22:24:13.039109  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039238  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.041385  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041666  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.041698  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041838  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.042022  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042206  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042396  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.042612  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.043170  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.043189  255057 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-525875 && echo "no-preload-525875" | sudo tee /etc/hostname
	I0817 22:24:13.177388  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-525875
	
	I0817 22:24:13.177433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.180249  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180571  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.180599  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180808  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.181054  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181224  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181371  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.181544  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.181969  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.181994  255057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-525875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-525875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-525875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:13.307614  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:13.307675  255057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:13.307719  255057 buildroot.go:174] setting up certificates
	I0817 22:24:13.307731  255057 provision.go:83] configureAuth start
	I0817 22:24:13.307745  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.308044  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.311084  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311457  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.311491  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311665  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.313712  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314066  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.314101  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314252  255057 provision.go:138] copyHostCerts
	I0817 22:24:13.314354  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:13.314397  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:13.314495  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:13.314610  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:13.314623  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:13.314661  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:13.314735  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:13.314745  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:13.314779  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:13.314841  255057 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.no-preload-525875 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube no-preload-525875]
	I0817 22:24:13.395589  255057 provision.go:172] copyRemoteCerts
	I0817 22:24:13.395693  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:13.395724  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.398603  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.398936  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.398972  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.399154  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.399379  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.399566  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.399717  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.487194  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:13.510918  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:24:13.534013  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:13.556876  255057 provision.go:86] duration metric: configureAuth took 249.122979ms
	I0817 22:24:13.556910  255057 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:13.557143  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:13.557265  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.560140  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560483  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.560514  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560748  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.560965  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561143  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561274  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.561516  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.562128  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.562155  255057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:13.863145  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:13.863181  255057 machine.go:91] provisioned docker machine in 824.487372ms
	I0817 22:24:13.863206  255057 start.go:300] post-start starting for "no-preload-525875" (driver="kvm2")
	I0817 22:24:13.863219  255057 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:13.863247  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.863636  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:13.863681  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.866612  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.866950  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.867000  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.867115  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.867333  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.867524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.867695  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.957157  255057 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:13.961765  255057 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:13.961801  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:13.961919  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:13.962002  255057 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:13.962116  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:13.971105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:13.999336  255057 start.go:303] post-start completed in 136.111451ms
	I0817 22:24:13.999367  255057 fix.go:56] fixHost completed within 19.636437946s
	I0817 22:24:13.999391  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.002294  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002689  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.002717  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002995  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.003236  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003572  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.003744  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:14.004145  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:14.004160  255057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:14.122987  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311054.069328214
	
	I0817 22:24:14.123011  255057 fix.go:206] guest clock: 1692311054.069328214
	I0817 22:24:14.123019  255057 fix.go:219] Guest: 2023-08-17 22:24:14.069328214 +0000 UTC Remote: 2023-08-17 22:24:13.999370872 +0000 UTC m=+291.082280559 (delta=69.957342ms)
	I0817 22:24:14.123080  255057 fix.go:190] guest clock delta is within tolerance: 69.957342ms
	I0817 22:24:14.123087  255057 start.go:83] releasing machines lock for "no-preload-525875", held for 19.760401588s
	I0817 22:24:14.123125  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.123445  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:14.126573  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.126925  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.126962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.127146  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127781  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127974  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.128071  255057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:14.128125  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.128226  255057 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:14.128258  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.131020  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131333  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131367  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131390  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.131715  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.131789  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131829  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131895  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.131975  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.132057  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.132156  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.132272  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.132425  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.219665  255057 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:14.247437  255057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:14.400674  255057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:14.408384  255057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:14.408502  255057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:14.423811  255057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:14.423860  255057 start.go:466] detecting cgroup driver to use...
	I0817 22:24:14.423953  255057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:14.436628  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:14.448671  255057 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:14.448765  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:14.461946  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:14.475294  255057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:14.581194  255057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:14.708045  255057 docker.go:212] disabling docker service ...
	I0817 22:24:14.708110  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:14.722033  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:14.733323  255057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:14.857587  255057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:14.980798  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:14.994728  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:15.012428  255057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:15.012505  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.021683  255057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:15.021763  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.031095  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.040825  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.050770  255057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:15.060644  255057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:15.068941  255057 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:15.069022  255057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:15.081634  255057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:15.090552  255057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:15.205174  255057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:15.383127  255057 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:15.383224  255057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:15.391893  255057 start.go:534] Will wait 60s for crictl version
	I0817 22:24:15.391983  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.398121  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:15.450273  255057 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:15.450368  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.506757  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.560170  255057 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:24:14.149845  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Start
	I0817 22:24:14.150032  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring networks are active...
	I0817 22:24:14.150803  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network default is active
	I0817 22:24:14.151110  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network mk-embed-certs-437183 is active
	I0817 22:24:14.151492  255215 main.go:141] libmachine: (embed-certs-437183) Getting domain xml...
	I0817 22:24:14.152247  255215 main.go:141] libmachine: (embed-certs-437183) Creating domain...
	I0817 22:24:15.472135  255215 main.go:141] libmachine: (embed-certs-437183) Waiting to get IP...
	I0817 22:24:15.473014  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.473413  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.473492  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.473421  256157 retry.go:31] will retry after 194.38634ms: waiting for machine to come up
	I0817 22:24:15.670047  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.670479  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.670528  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.670445  256157 retry.go:31] will retry after 332.988154ms: waiting for machine to come up
	I0817 22:24:16.005357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.005862  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.005898  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.005790  256157 retry.go:31] will retry after 376.364025ms: waiting for machine to come up
	I0817 22:24:16.384423  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.384866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.384916  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.384805  256157 retry.go:31] will retry after 392.048125ms: waiting for machine to come up
	I0817 22:24:16.778356  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.778744  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.778780  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.778683  256157 retry.go:31] will retry after 688.962088ms: waiting for machine to come up
	I0817 22:24:17.469767  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:17.470257  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:17.470287  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:17.470211  256157 retry.go:31] will retry after 660.617465ms: waiting for machine to come up
	I0817 22:24:15.561695  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:15.564750  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565097  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:15.565127  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565409  255057 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:15.569673  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:15.584980  255057 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:24:15.585030  255057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:15.617365  255057 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:24:15.617396  255057 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.0-rc.1 registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 registry.k8s.io/kube-scheduler:v1.28.0-rc.1 registry.k8s.io/kube-proxy:v1.28.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:24:15.617470  255057 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.617497  255057 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.617529  255057 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.617606  255057 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.617541  255057 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.617637  255057 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0817 22:24:15.617507  255057 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.617985  255057 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619154  255057 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0817 22:24:15.619338  255057 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619355  255057 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.619350  255057 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.619369  255057 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.619335  255057 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.619381  255057 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.619414  255057 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.793551  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.793935  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.796339  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.797436  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.806385  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.813161  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0817 22:24:15.840200  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.935464  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.940863  255057 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0817 22:24:15.940940  255057 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.940881  255057 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" does not exist at hash "046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd" in container runtime
	I0817 22:24:15.941028  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.941031  255057 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.941115  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952609  255057 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" does not exist at hash "e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef" in container runtime
	I0817 22:24:15.952687  255057 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.952709  255057 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0817 22:24:15.952741  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952751  255057 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.952790  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.007640  255057 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" does not exist at hash "2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d" in container runtime
	I0817 22:24:16.007686  255057 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.007740  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099763  255057 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.0-rc.1" does not exist at hash "cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8" in container runtime
	I0817 22:24:16.099817  255057 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.099873  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099909  255057 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0817 22:24:16.099969  255057 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.099980  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:16.100019  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.100052  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:16.100127  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:16.100145  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:16.100198  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.105175  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.197301  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0817 22:24:16.197377  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197418  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197432  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197437  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.197476  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.197421  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:16.197520  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197535  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.214043  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0817 22:24:16.214189  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:16.225659  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1 (exists)
	I0817 22:24:16.225690  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225750  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225882  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.225973  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.229070  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1 (exists)
	I0817 22:24:16.229235  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1 (exists)
	I0817 22:24:16.258828  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0817 22:24:16.258905  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 22:24:16.258990  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0817 22:24:16.259013  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:18.132851  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:18.133243  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:18.133310  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:18.133225  256157 retry.go:31] will retry after 900.178694ms: waiting for machine to come up
	I0817 22:24:19.035179  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:19.035579  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:19.035615  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:19.035514  256157 retry.go:31] will retry after 1.198702878s: waiting for machine to come up
	I0817 22:24:20.236711  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:20.237240  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:20.237273  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:20.237201  256157 retry.go:31] will retry after 1.809846012s: waiting for machine to come up
	I0817 22:24:22.048866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:22.049357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:22.049392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:22.049300  256157 retry.go:31] will retry after 1.671738979s: waiting for machine to come up
	I0817 22:24:18.395405  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1: (2.169611406s)
	I0817 22:24:18.395443  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 from cache
	I0817 22:24:18.395478  255057 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (2.169478272s)
	I0817 22:24:18.395493  255057 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.136469625s)
	I0817 22:24:18.395493  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:18.395509  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0817 22:24:18.395512  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1 (exists)
	I0817 22:24:18.395560  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:20.871009  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1: (2.475415377s)
	I0817 22:24:20.871043  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 from cache
	I0817 22:24:20.871073  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:20.871129  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:23.722312  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:23.722829  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:23.722864  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:23.722757  256157 retry.go:31] will retry after 1.856182792s: waiting for machine to come up
	I0817 22:24:25.580432  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:25.580936  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:25.580969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:25.580873  256157 retry.go:31] will retry after 2.404448523s: waiting for machine to come up
	I0817 22:24:23.529377  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1: (2.658213494s)
	I0817 22:24:23.529418  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 from cache
	I0817 22:24:23.529456  255057 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:23.529532  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:24.907071  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.377507339s)
	I0817 22:24:24.907105  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0817 22:24:24.907135  255057 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:24.907203  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:27.988784  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:27.989226  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:27.989252  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:27.989214  256157 retry.go:31] will retry after 4.145677854s: waiting for machine to come up
	I0817 22:24:32.139031  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139722  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has current primary IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139755  255215 main.go:141] libmachine: (embed-certs-437183) Found IP for machine: 192.168.39.186
	I0817 22:24:32.139768  255215 main.go:141] libmachine: (embed-certs-437183) Reserving static IP address...
	I0817 22:24:32.140361  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.140408  255215 main.go:141] libmachine: (embed-certs-437183) Reserved static IP address: 192.168.39.186
	I0817 22:24:32.140428  255215 main.go:141] libmachine: (embed-certs-437183) DBG | skip adding static IP to network mk-embed-certs-437183 - found existing host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"}
	I0817 22:24:32.140450  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Getting to WaitForSSH function...
	I0817 22:24:32.140465  255215 main.go:141] libmachine: (embed-certs-437183) Waiting for SSH to be available...
	I0817 22:24:32.142752  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143141  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.143192  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143343  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH client type: external
	I0817 22:24:32.143392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa (-rw-------)
	I0817 22:24:32.143431  255215 main.go:141] libmachine: (embed-certs-437183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:32.143459  255215 main.go:141] libmachine: (embed-certs-437183) DBG | About to run SSH command:
	I0817 22:24:32.143475  255215 main.go:141] libmachine: (embed-certs-437183) DBG | exit 0
	I0817 22:24:32.246211  255215 main.go:141] libmachine: (embed-certs-437183) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:32.246582  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetConfigRaw
	I0817 22:24:32.247284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.249789  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250204  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.250237  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250567  255215 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/config.json ...
	I0817 22:24:32.250808  255215 machine.go:88] provisioning docker machine ...
	I0817 22:24:32.250831  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:32.251049  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251209  255215 buildroot.go:166] provisioning hostname "embed-certs-437183"
	I0817 22:24:32.251230  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251344  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.253729  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254094  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.254124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254276  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.254434  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254654  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254817  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.254981  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.255466  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.255481  255215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-437183 && echo "embed-certs-437183" | sudo tee /etc/hostname
	I0817 22:24:32.412247  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437183
	
	I0817 22:24:32.412284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.415194  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415508  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.415561  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415666  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.415910  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416113  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416297  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.416501  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.417004  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.417024  255215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-437183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-437183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-437183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:32.559200  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:32.559253  255215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:32.559282  255215 buildroot.go:174] setting up certificates
	I0817 22:24:32.559299  255215 provision.go:83] configureAuth start
	I0817 22:24:32.559313  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.559696  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.562469  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.562960  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.562989  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.563141  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.565760  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566120  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.566178  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566344  255215 provision.go:138] copyHostCerts
	I0817 22:24:32.566427  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:32.566443  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:32.566504  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:32.566633  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:32.566642  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:32.566676  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:32.566730  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:32.566738  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:32.566755  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:32.566803  255215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-437183 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube embed-certs-437183]
	I0817 22:24:31.437386  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.530148826s)
	I0817 22:24:31.437453  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0817 22:24:31.437478  255057 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:31.437578  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:32.398228  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0817 22:24:32.398294  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:32.398359  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:33.487487  255491 start.go:369] acquired machines lock for "default-k8s-diff-port-321287" in 4m16.661569765s
	I0817 22:24:33.487552  255491 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:33.487569  255491 fix.go:54] fixHost starting: 
	I0817 22:24:33.488059  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:33.488104  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:33.506430  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0817 22:24:33.506958  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:33.507587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:24:33.507618  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:33.508078  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:33.508296  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:33.508471  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:24:33.510492  255491 fix.go:102] recreateIfNeeded on default-k8s-diff-port-321287: state=Stopped err=<nil>
	I0817 22:24:33.510539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	W0817 22:24:33.510738  255491 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:33.512965  255491 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-321287" ...
	I0817 22:24:32.687763  255215 provision.go:172] copyRemoteCerts
	I0817 22:24:32.687835  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:32.687864  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.690614  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.690921  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.690963  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.691253  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.691469  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.691631  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.691745  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:32.788388  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:32.811861  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:32.835407  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0817 22:24:32.858542  255215 provision.go:86] duration metric: configureAuth took 299.225654ms
	I0817 22:24:32.858581  255215 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:32.858850  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:32.858989  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.861726  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862140  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.862186  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862436  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.862717  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.862961  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.863135  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.863321  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.863744  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.863762  255215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:33.202904  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:33.202942  255215 machine.go:91] provisioned docker machine in 952.11856ms
	I0817 22:24:33.202986  255215 start.go:300] post-start starting for "embed-certs-437183" (driver="kvm2")
	I0817 22:24:33.203002  255215 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:33.203039  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.203427  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:33.203465  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.206544  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.206969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.207004  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.207154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.207407  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.207591  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.207747  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.304648  255215 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:33.309404  255215 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:33.309435  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:33.309536  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:33.309635  255215 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:33.309752  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:33.318682  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:33.343830  255215 start.go:303] post-start completed in 140.8201ms
	I0817 22:24:33.343870  255215 fix.go:56] fixHost completed within 19.220571855s
	I0817 22:24:33.343901  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.347196  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347625  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.347658  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347927  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.348154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348336  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348487  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.348741  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:33.349346  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:33.349361  255215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 22:24:33.487290  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311073.433845199
	
	I0817 22:24:33.487319  255215 fix.go:206] guest clock: 1692311073.433845199
	I0817 22:24:33.487331  255215 fix.go:219] Guest: 2023-08-17 22:24:33.433845199 +0000 UTC Remote: 2023-08-17 22:24:33.343875474 +0000 UTC m=+290.714391364 (delta=89.969725ms)
	I0817 22:24:33.487370  255215 fix.go:190] guest clock delta is within tolerance: 89.969725ms
	I0817 22:24:33.487378  255215 start.go:83] releasing machines lock for "embed-certs-437183", held for 19.364124776s
	I0817 22:24:33.487412  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.487714  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:33.490444  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.490945  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.490975  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.491191  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492024  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492278  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492378  255215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:33.492440  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.492569  255215 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:33.492600  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.495461  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495742  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495836  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.495879  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.496130  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496147  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496287  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496341  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496445  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496604  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496605  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496792  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.496886  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.634234  255215 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:33.642529  255215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:33.802107  255215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:33.808439  255215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:33.808520  255215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:33.823947  255215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:33.823975  255215 start.go:466] detecting cgroup driver to use...
	I0817 22:24:33.824058  255215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:33.839665  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:33.854435  255215 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:33.854512  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:33.871530  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:33.886466  255215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:34.017312  255215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:34.152720  255215 docker.go:212] disabling docker service ...
	I0817 22:24:34.152811  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:34.170506  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:34.186072  255215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:34.327678  255215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:34.450774  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:34.468330  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:34.491610  255215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:34.491684  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.506266  255215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:34.506360  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.517471  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.531351  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.542363  255215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:34.553383  255215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:34.562937  255215 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:34.563029  255215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:34.575978  255215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:34.588500  255215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:34.715821  255215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:34.912771  255215 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:34.912853  255215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:34.918377  255215 start.go:534] Will wait 60s for crictl version
	I0817 22:24:34.918445  255215 ssh_runner.go:195] Run: which crictl
	I0817 22:24:34.922462  255215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:34.962654  255215 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:34.962754  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.020574  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.078516  255215 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:33.514448  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Start
	I0817 22:24:33.514667  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring networks are active...
	I0817 22:24:33.515504  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network default is active
	I0817 22:24:33.515973  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network mk-default-k8s-diff-port-321287 is active
	I0817 22:24:33.516607  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Getting domain xml...
	I0817 22:24:33.517407  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Creating domain...
	I0817 22:24:35.032992  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting to get IP...
	I0817 22:24:35.034213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034833  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.034747  256286 retry.go:31] will retry after 255.561446ms: waiting for machine to come up
	I0817 22:24:35.292497  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293071  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293110  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.293035  256286 retry.go:31] will retry after 265.433217ms: waiting for machine to come up
	I0817 22:24:35.560591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561221  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.561138  256286 retry.go:31] will retry after 429.726379ms: waiting for machine to come up
	I0817 22:24:35.993046  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993573  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.993482  256286 retry.go:31] will retry after 583.273043ms: waiting for machine to come up
	I0817 22:24:36.578452  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578943  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578983  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:36.578889  256286 retry.go:31] will retry after 504.577651ms: waiting for machine to come up
	I0817 22:24:35.080561  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:35.083955  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084338  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:35.084376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084624  255215 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:35.088994  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:35.104758  255215 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:35.104814  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:35.140529  255215 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:35.140606  255215 ssh_runner.go:195] Run: which lz4
	I0817 22:24:35.144869  255215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:24:35.149131  255215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:35.149168  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:24:37.067793  255215 crio.go:444] Took 1.922962 seconds to copy over tarball
	I0817 22:24:37.067867  255215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:24:34.276465  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (1.878070898s)
	I0817 22:24:34.276495  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 from cache
	I0817 22:24:34.276528  255057 cache_images.go:123] Successfully loaded all cached images
	I0817 22:24:34.276535  255057 cache_images.go:92] LoadImages completed in 18.659123421s
	I0817 22:24:34.276651  255057 ssh_runner.go:195] Run: crio config
	I0817 22:24:34.349440  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:34.349470  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:34.349525  255057 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:34.349559  255057 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-525875 NodeName:no-preload-525875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:34.349737  255057 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-525875"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:34.349852  255057 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-525875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:34.349927  255057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:24:34.361082  255057 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:34.361211  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:34.370571  255057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0817 22:24:34.390596  255057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:24:34.409602  255057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0817 22:24:34.431076  255057 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:34.435869  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:34.448753  255057 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875 for IP: 192.168.61.196
	I0817 22:24:34.448854  255057 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:34.449077  255057 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:34.449125  255057 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:34.449229  255057 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/client.key
	I0817 22:24:34.449287  255057 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key.0d67e2f2
	I0817 22:24:34.449320  255057 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key
	I0817 22:24:34.449438  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:34.449466  255057 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:34.449476  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:34.449499  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:34.449523  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:34.449545  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:34.449586  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:34.450600  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:34.481454  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:24:34.514638  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:34.539306  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:24:34.565390  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:34.595648  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:34.628105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:34.654925  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:34.684138  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:34.709433  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:34.736933  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:34.772217  255057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:34.790940  255057 ssh_runner.go:195] Run: openssl version
	I0817 22:24:34.800419  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:34.811545  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819623  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819697  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.825793  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:34.836531  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:34.847239  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852331  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852394  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.861659  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:34.871817  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:34.883257  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889654  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889728  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.897773  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
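
Annotation: the hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL CA-directory convention: at verification time OpenSSL hashes a certificate's subject name and looks for /etc/ssl/certs/<hash>.0. A minimal sketch of the same step done by hand, assuming the minikubeCA.pem path from the log:

    # Compute the subject-name hash, then create the symlink OpenSSL looks up.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
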
	I0817 22:24:34.909259  255057 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:34.914775  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:34.921549  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:34.928370  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:34.934849  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:34.941470  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:34.949932  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
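
Annotation: each of the six openssl calls above asks whether a control-plane certificate stays valid for at least the next 86400 seconds; -checkend exits 0 if it does and 1 if it expires within that window, presumably so stale certs can be regenerated before the restart. A minimal sketch of the same probe, reusing two of the paths from the log:

    # Exit status tells us whether the cert survives the next 24 hours.
    for crt in /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      if openssl x509 -noout -in "$crt" -checkend 86400; then
        echo "$crt: valid for at least another 24h"
      else
        echo "$crt: expires within 24h"
      fi
    done
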
	I0817 22:24:34.956863  255057 kubeadm.go:404] StartCluster: {Name:no-preload-525875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525
875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:34.957036  255057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:34.957123  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:35.005195  255057 cri.go:89] found id: ""
	I0817 22:24:35.005282  255057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:35.015727  255057 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:35.015754  255057 kubeadm.go:636] restartCluster start
	I0817 22:24:35.015821  255057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:35.025333  255057 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.026796  255057 kubeconfig.go:92] found "no-preload-525875" server: "https://192.168.61.196:8443"
	I0817 22:24:35.030361  255057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:35.040698  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.040754  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.055650  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.055675  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.055719  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.066812  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.567215  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.567291  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.580471  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.066958  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.067035  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.081758  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.567234  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.567320  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.582474  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.066970  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.067060  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.079066  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.567780  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.567887  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.583652  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.085672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086184  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086222  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.086130  256286 retry.go:31] will retry after 660.028004ms: waiting for machine to come up
	I0817 22:24:37.747563  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748056  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748086  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.748020  256286 retry.go:31] will retry after 798.952498ms: waiting for machine to come up
	I0817 22:24:38.548762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549243  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549276  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:38.549193  256286 retry.go:31] will retry after 1.15249289s: waiting for machine to come up
	I0817 22:24:39.703164  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703739  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703773  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:39.703675  256286 retry.go:31] will retry after 1.300284471s: waiting for machine to come up
	I0817 22:24:41.006289  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006781  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006814  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:41.006717  256286 retry.go:31] will retry after 1.500753962s: waiting for machine to come up
	I0817 22:24:40.155737  255215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087825588s)
	I0817 22:24:40.155771  255215 crio.go:451] Took 3.087946 seconds to extract the tarball
	I0817 22:24:40.155784  255215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:24:40.196940  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:40.238837  255215 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:24:40.238863  255215 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:24:40.238934  255215 ssh_runner.go:195] Run: crio config
	I0817 22:24:40.302526  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:24:40.302552  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:40.302572  255215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:40.302593  255215 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-437183 NodeName:embed-certs-437183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:40.302793  255215 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-437183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:40.302860  255215 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-437183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:40.302914  255215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:24:40.312428  255215 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:40.312517  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:40.321824  255215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0817 22:24:40.340069  255215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:24:40.358609  255215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0817 22:24:40.376546  255215 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:40.380576  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
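
Annotation: the one-liner above pins control-plane.minikube.internal inside the guest: it filters any existing entry out of /etc/hosts, appends the current node IP, and copies the result back via a temp file. Spelled out step by step, with the IP taken from the log:

    # Rebuild /etc/hosts with a single control-plane.minikube.internal entry.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.39.186\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
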
	I0817 22:24:40.394264  255215 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183 for IP: 192.168.39.186
	I0817 22:24:40.394310  255215 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:40.394509  255215 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:40.394569  255215 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:40.394678  255215 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/client.key
	I0817 22:24:40.394749  255215 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key.d0691019
	I0817 22:24:40.394810  255215 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key
	I0817 22:24:40.394956  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:40.394999  255215 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:40.395013  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:40.395056  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:40.395096  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:40.395127  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:40.395197  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:40.396122  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:40.421809  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:24:40.447412  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:40.472678  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:24:40.501303  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:40.528016  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:40.553741  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:40.581792  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:40.609270  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:40.634901  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:40.659698  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:40.685767  255215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:40.704114  255215 ssh_runner.go:195] Run: openssl version
	I0817 22:24:40.709921  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:40.720035  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725167  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725232  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.731054  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:40.741277  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:40.751649  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757538  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757621  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.763574  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:40.773786  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:40.784152  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790448  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790529  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.796689  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:40.806968  255215 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:40.811858  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:40.818172  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:40.824439  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:40.830588  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:40.836734  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:40.842857  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:24:40.849072  255215 kubeadm.go:404] StartCluster: {Name:embed-certs-437183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/ho
me/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:40.849208  255215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:40.849269  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:40.882040  255215 cri.go:89] found id: ""
	I0817 22:24:40.882132  255215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:40.893833  255215 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:40.893859  255215 kubeadm.go:636] restartCluster start
	I0817 22:24:40.893926  255215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:40.906498  255215 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.907768  255215 kubeconfig.go:92] found "embed-certs-437183" server: "https://192.168.39.186:8443"
	I0817 22:24:40.910282  255215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:40.921945  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.922021  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.933335  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.933360  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.933417  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.944168  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.444996  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.445109  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.457502  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.944752  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.944881  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.960929  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.444350  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.444464  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.461555  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.066927  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.067043  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.082831  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.567259  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.567347  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.581544  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.067112  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.067211  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.078859  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.566916  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.567075  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.582637  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.067188  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.067286  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.082771  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.567236  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.567331  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.583192  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.067806  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.067953  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.082962  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.567559  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.567664  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.582761  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.067267  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.067357  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.078631  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.567181  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.567299  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.583270  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.509044  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509662  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509688  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:42.509599  256286 retry.go:31] will retry after 2.726859315s: waiting for machine to come up
	I0817 22:24:45.239162  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239727  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239756  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:45.239667  256286 retry.go:31] will retry after 2.868820101s: waiting for machine to come up
	I0817 22:24:42.944983  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.945083  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.960949  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.444415  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.444541  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.460157  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.944659  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.944757  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.960506  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.444408  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.444544  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.460666  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.944252  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.944358  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.956137  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.444667  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.444779  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.460524  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.944710  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.945003  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.961038  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.444556  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.444684  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.459345  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.944760  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.944858  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.961217  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:47.444786  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.444935  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.460748  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.067683  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.067794  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.083038  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.567750  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.567850  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.579427  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.066928  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.067014  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.078671  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.567463  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.567559  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.579377  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.041151  255057 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:45.041202  255057 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:45.041218  255057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:45.041279  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:45.080480  255057 cri.go:89] found id: ""
	I0817 22:24:45.080569  255057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:45.096518  255057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:45.107778  255057 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:45.107880  255057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117115  255057 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117151  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.269517  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.790366  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.988106  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.124121  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
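
Annotation: because the config check above found no kubeconfig files and no kube-system containers, the restart path re-runs the individual kubeadm init phases against the rendered config rather than doing a full kubeadm init. A condensed sketch of the same sequence, with the binaries directory and config path taken from the log:

    # Re-run only the phases needed to bring the control plane back.
    K8S_BIN=/var/lib/minikube/binaries/v1.28.0-rc.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    # $phase is intentionally unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done
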
	I0817 22:24:46.219342  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:46.219438  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.241849  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.795050  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.295314  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.795361  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.111566  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112173  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:48.112079  256286 retry.go:31] will retry after 3.129130141s: waiting for machine to come up
	I0817 22:24:51.245244  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245759  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245788  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:51.245707  256286 retry.go:31] will retry after 4.573749963s: waiting for machine to come up
	I0817 22:24:47.944303  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.944406  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.960613  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.445144  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.445245  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.460221  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.944726  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.944811  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.958575  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.444744  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.444875  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.460348  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.944986  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.945117  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.958396  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.445013  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:50.445110  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:50.459941  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.922423  255215 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:50.922493  255215 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:50.922513  255215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:50.922581  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:50.964064  255215 cri.go:89] found id: ""
	I0817 22:24:50.964154  255215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:50.980513  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:50.990086  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:50.990152  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999907  255215 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999935  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:51.147593  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.150655  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.002996323s)
	I0817 22:24:52.150694  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.367611  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.461186  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.534447  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:52.534547  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:52.551513  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.295087  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.794596  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.817042  255057 api_server.go:72] duration metric: took 2.597699698s to wait for apiserver process to appear ...
	I0817 22:24:48.817069  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:48.817086  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.817615  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:48.817653  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.818012  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:49.318894  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.160567  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.160612  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.160627  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.246065  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.246117  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.318300  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.394871  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.394932  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:52.818493  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.825349  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.825391  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.318277  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.324705  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:53.324751  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.818240  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.823823  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:24:53.834528  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:24:53.834573  255057 api_server.go:131] duration metric: took 5.01749639s to wait for apiserver health ...
	I0817 22:24:53.834586  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:53.834596  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:53.836827  255057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:53.838602  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:24:53.850880  255057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:24:53.871556  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:24:53.886793  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:24:53.886858  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:24:53.886875  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:24:53.886889  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:24:53.886902  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:24:53.886922  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:24:53.886939  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:24:53.886948  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:24:53.886961  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:24:53.886975  255057 system_pods.go:74] duration metric: took 15.392207ms to wait for pod list to return data ...
	I0817 22:24:53.886988  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:24:53.891527  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:24:53.891589  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:24:53.891630  255057 node_conditions.go:105] duration metric: took 4.635197ms to run NodePressure ...
	I0817 22:24:53.891656  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:54.230065  255057 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239113  255057 kubeadm.go:787] kubelet initialised
	I0817 22:24:54.239146  255057 kubeadm.go:788] duration metric: took 9.048225ms waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239159  255057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:54.251454  255057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.266584  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266619  255057 pod_ready.go:81] duration metric: took 15.127554ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.266633  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266645  255057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.278901  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278932  255057 pod_ready.go:81] duration metric: took 12.266962ms waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.278944  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278952  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.297982  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298020  255057 pod_ready.go:81] duration metric: took 19.058778ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.298032  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298047  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.309929  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309967  255057 pod_ready.go:81] duration metric: took 11.898508ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.309980  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309991  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.676448  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676495  255057 pod_ready.go:81] duration metric: took 366.48994ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.676507  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676547  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.078351  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078392  255057 pod_ready.go:81] duration metric: took 401.831269ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.078405  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078416  255057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.476059  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476101  255057 pod_ready.go:81] duration metric: took 397.677369ms waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.476111  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476121  255057 pod_ready.go:38] duration metric: took 1.236947103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:55.476143  255057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:24:55.487413  255057 ops.go:34] apiserver oom_adj: -16
	I0817 22:24:55.487448  255057 kubeadm.go:640] restartCluster took 20.471686915s
	I0817 22:24:55.487459  255057 kubeadm.go:406] StartCluster complete in 20.530629906s
	I0817 22:24:55.487482  255057 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.487591  255057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:24:55.489799  255057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.490091  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:24:55.490202  255057 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:24:55.490349  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:55.490375  255057 addons.go:69] Setting storage-provisioner=true in profile "no-preload-525875"
	I0817 22:24:55.490380  255057 addons.go:69] Setting metrics-server=true in profile "no-preload-525875"
	I0817 22:24:55.490397  255057 addons.go:231] Setting addon storage-provisioner=true in "no-preload-525875"
	I0817 22:24:55.490404  255057 addons.go:231] Setting addon metrics-server=true in "no-preload-525875"
	W0817 22:24:55.490409  255057 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:24:55.490435  255057 addons.go:69] Setting default-storageclass=true in profile "no-preload-525875"
	I0817 22:24:55.490465  255057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-525875"
	I0817 22:24:55.490474  255057 host.go:66] Checking if "no-preload-525875" exists ...
	W0817 22:24:55.490413  255057 addons.go:240] addon metrics-server should already be in state true
	I0817 22:24:55.490547  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.491607  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.491742  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492181  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492232  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492255  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492291  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.503335  255057 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-525875" context rescaled to 1 replicas
	I0817 22:24:55.503399  255057 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:24:55.505836  255057 out.go:177] * Verifying Kubernetes components...
	I0817 22:24:55.507438  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:24:55.512841  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0817 22:24:55.513126  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0817 22:24:55.513241  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0817 22:24:55.513441  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513567  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513770  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.514042  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514082  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514128  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514159  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514577  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514595  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514708  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514733  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514804  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.515081  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.515186  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515223  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.515651  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515699  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.532135  255057 addons.go:231] Setting addon default-storageclass=true in "no-preload-525875"
	W0817 22:24:55.532171  255057 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:24:55.532205  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.532614  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.532665  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.535464  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0817 22:24:55.537205  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:24:55.537544  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.537676  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.538005  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538022  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538197  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538209  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538328  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538574  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538694  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.538757  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.540907  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.541221  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.543481  255057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:55.545233  255057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:24:55.820955  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.821534  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Found IP for machine: 192.168.50.30
	I0817 22:24:55.821557  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserving static IP address...
	I0817 22:24:55.821590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has current primary IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.822134  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.822169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | skip adding static IP to network mk-default-k8s-diff-port-321287 - found existing host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"}
	I0817 22:24:55.822189  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Getting to WaitForSSH function...
	I0817 22:24:55.822212  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserved static IP address: 192.168.50.30
	I0817 22:24:55.822225  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for SSH to be available...
	I0817 22:24:55.825198  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.825630  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825769  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH client type: external
	I0817 22:24:55.825802  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa (-rw-------)
	I0817 22:24:55.825837  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:55.825855  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | About to run SSH command:
	I0817 22:24:55.825874  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | exit 0
	I0817 22:24:55.923224  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:55.923669  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetConfigRaw
	I0817 22:24:55.924434  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:55.927453  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.927935  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.927987  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.928304  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:24:55.928581  255491 machine.go:88] provisioning docker machine ...
	I0817 22:24:55.928610  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:55.928818  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.928963  255491 buildroot.go:166] provisioning hostname "default-k8s-diff-port-321287"
	I0817 22:24:55.928984  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.929169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:55.931672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932179  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.932213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:55.932606  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.932862  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.933008  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:55.933228  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:55.933895  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:55.933917  255491 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-321287 && echo "default-k8s-diff-port-321287" | sudo tee /etc/hostname
	I0817 22:24:56.066560  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-321287
	
	I0817 22:24:56.066599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.070072  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070509  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.070590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070901  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.071175  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071377  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071589  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.071813  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.072479  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.072511  255491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-321287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-321287/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-321287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:56.210857  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:56.210897  255491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:56.210954  255491 buildroot.go:174] setting up certificates
	I0817 22:24:56.210968  255491 provision.go:83] configureAuth start
	I0817 22:24:56.210981  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:56.211435  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:56.214305  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214711  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.214762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214931  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.217766  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218200  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.218245  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218444  255491 provision.go:138] copyHostCerts
	I0817 22:24:56.218519  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:56.218533  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:56.218609  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:56.218728  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:56.218738  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:56.218769  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:56.218846  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:56.218856  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:56.218886  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:56.218953  255491 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-321287 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube default-k8s-diff-port-321287]
	I0817 22:24:56.289985  255491 provision.go:172] copyRemoteCerts
	I0817 22:24:56.290068  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:56.290104  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.293536  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.293996  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.294027  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.294218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.294456  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.294675  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.294866  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.386746  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:56.413448  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 22:24:56.438758  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:24:56.467489  255491 provision.go:86] duration metric: configureAuth took 256.504259ms
	I0817 22:24:56.467525  255491 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:56.467792  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:56.467917  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.470870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.471373  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471601  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.471839  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472048  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.472441  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.473139  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.473162  255491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:57.100503  254975 start.go:369] acquired machines lock for "old-k8s-version-294781" in 57.735745135s
	I0817 22:24:57.100571  254975 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:57.100583  254975 fix.go:54] fixHost starting: 
	I0817 22:24:57.101120  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:57.101172  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:57.121393  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0817 22:24:57.122017  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:57.122807  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:24:57.122834  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:57.123289  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:57.123463  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:24:57.123584  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:24:57.125545  254975 fix.go:102] recreateIfNeeded on old-k8s-version-294781: state=Stopped err=<nil>
	I0817 22:24:57.125580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	W0817 22:24:57.125759  254975 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:57.127853  254975 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-294781" ...
	I0817 22:24:55.546816  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:24:55.546839  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:24:55.546870  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.545324  255057 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.546955  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:24:55.546971  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.551364  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552354  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552580  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0817 22:24:55.552920  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.552950  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553052  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.553160  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553171  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.553238  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553408  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553592  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553747  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553751  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553805  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.553823  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.553914  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553952  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554237  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.554648  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554839  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.554878  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.594781  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0817 22:24:55.595253  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.595928  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.595955  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.596358  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.596659  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.598866  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.599111  255057 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.599123  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:24:55.599141  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.602520  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.602895  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.602924  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.603114  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.603334  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.603537  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.603678  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.693508  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:24:55.693535  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:24:55.720303  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.739691  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:24:55.739725  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:24:55.752809  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.793480  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:55.793512  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:24:55.805075  255057 node_ready.go:35] waiting up to 6m0s for node "no-preload-525875" to be "Ready" ...
	I0817 22:24:55.805164  255057 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:24:55.834328  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:57.451781  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.731427598s)
	I0817 22:24:57.451824  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.698971636s)
	I0817 22:24:57.451845  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451859  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.451876  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451887  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452756  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.452808  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.452818  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.452832  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.452842  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452965  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453000  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453009  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453019  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453027  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453173  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453247  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453270  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453295  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453306  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453677  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453709  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453720  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.455299  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.455300  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.455325  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.564475  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.730071346s)
	I0817 22:24:57.564539  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.564551  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565087  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565160  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565170  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565185  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.565217  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565483  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565530  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565539  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565550  255057 addons.go:467] Verifying addon metrics-server=true in "no-preload-525875"
	I0817 22:24:57.569420  255057 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:24:53.063998  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:53.564081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.064081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.564321  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.064476  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.090168  255215 api_server.go:72] duration metric: took 2.555721263s to wait for apiserver process to appear ...
	I0817 22:24:55.090200  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:55.090223  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:57.571712  255057 addons.go:502] enable addons completed in 2.081503451s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:24:57.882753  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:56.835353  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:56.835388  255491 machine.go:91] provisioned docker machine in 906.787255ms
	I0817 22:24:56.835401  255491 start.go:300] post-start starting for "default-k8s-diff-port-321287" (driver="kvm2")
	I0817 22:24:56.835415  255491 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:56.835460  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:56.835881  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:56.835925  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.838868  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839240  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.839274  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839366  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.839581  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.839808  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.839994  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.932979  255491 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:56.937642  255491 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:56.937675  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:56.937770  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:56.937877  255491 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:56.938003  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:56.949478  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:56.975557  255491 start.go:303] post-start completed in 140.136722ms
	I0817 22:24:56.975589  255491 fix.go:56] fixHost completed within 23.488019817s
	I0817 22:24:56.975618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.979039  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979486  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.979549  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979673  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.979951  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980152  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980301  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.980507  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.981194  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.981211  255491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:57.100308  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311097.042275817
	
	I0817 22:24:57.100341  255491 fix.go:206] guest clock: 1692311097.042275817
	I0817 22:24:57.100351  255491 fix.go:219] Guest: 2023-08-17 22:24:57.042275817 +0000 UTC Remote: 2023-08-17 22:24:56.975593678 +0000 UTC m=+280.298176937 (delta=66.682139ms)
	I0817 22:24:57.100389  255491 fix.go:190] guest clock delta is within tolerance: 66.682139ms
	I0817 22:24:57.100396  255491 start.go:83] releasing machines lock for "default-k8s-diff-port-321287", held for 23.61286841s
	I0817 22:24:57.100436  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.100813  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:57.104312  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.104719  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.104807  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.105050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105744  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105949  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.106081  255491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:57.106133  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.106268  255491 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:57.106395  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.110145  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110531  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.110577  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.111166  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.111352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.111402  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.111567  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.112700  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.112751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.112980  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.113206  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.113379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.113534  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.200530  255491 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:57.232758  255491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:57.405574  255491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:57.413543  255491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:57.413637  255491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:57.438687  255491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:57.438718  255491 start.go:466] detecting cgroup driver to use...
	I0817 22:24:57.438808  255491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:57.458572  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:57.475320  255491 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:57.475397  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:57.493585  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:57.512274  255491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:57.650975  255491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:57.788299  255491 docker.go:212] disabling docker service ...
	I0817 22:24:57.788395  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:57.806350  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:57.819894  255491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:57.966925  255491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:58.088274  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:58.107210  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:58.129691  255491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:58.129766  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.141217  255491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:58.141388  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.153376  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.166177  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.177326  255491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:58.191627  255491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:58.203913  255491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:58.204001  255491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:58.222901  255491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:58.233280  255491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:58.366794  255491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:58.603364  255491 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:58.603462  255491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:58.616285  255491 start.go:534] Will wait 60s for crictl version
	I0817 22:24:58.616397  255491 ssh_runner.go:195] Run: which crictl
	I0817 22:24:58.622933  255491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:58.668866  255491 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:58.668961  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.735680  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.800442  255491 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:59.550327  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.550367  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:59.550385  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:59.646890  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.646928  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:00.147486  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.160700  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.160745  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:00.647077  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.685626  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.685678  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.147134  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.156042  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:01.156083  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.647569  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.657291  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:25:01.686204  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:01.686260  255215 api_server.go:131] duration metric: took 6.59605111s to wait for apiserver health ...
	I0817 22:25:01.686274  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:25:01.686283  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:01.688856  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:58.802321  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:58.806172  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.806661  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:58.806696  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.807029  255491 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:58.813045  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:58.830937  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:58.831008  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:58.880355  255491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:58.880469  255491 ssh_runner.go:195] Run: which lz4
	I0817 22:24:58.886729  255491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:24:58.893418  255491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:58.893496  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:25:01.093233  255491 crio.go:444] Took 2.206544 seconds to copy over tarball
	I0817 22:25:01.093422  255491 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:24:57.129390  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Start
	I0817 22:24:57.134160  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring networks are active...
	I0817 22:24:57.134190  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network default is active
	I0817 22:24:57.134205  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network mk-old-k8s-version-294781 is active
	I0817 22:24:57.134214  254975 main.go:141] libmachine: (old-k8s-version-294781) Getting domain xml...
	I0817 22:24:57.134228  254975 main.go:141] libmachine: (old-k8s-version-294781) Creating domain...
	I0817 22:24:58.694125  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting to get IP...
	I0817 22:24:58.695714  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:58.696209  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:58.696356  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:58.696219  256493 retry.go:31] will retry after 307.640559ms: waiting for machine to come up
	I0817 22:24:59.006214  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.008497  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.008536  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.006931  256493 retry.go:31] will retry after 316.904618ms: waiting for machine to come up
	I0817 22:24:59.325929  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.326634  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.326672  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.326593  256493 retry.go:31] will retry after 466.068046ms: waiting for machine to come up
	I0817 22:24:59.794718  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.795268  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.795294  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.795200  256493 retry.go:31] will retry after 399.064857ms: waiting for machine to come up
	I0817 22:25:00.196015  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.196733  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.196760  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.196632  256493 retry.go:31] will retry after 553.183294ms: waiting for machine to come up
	I0817 22:25:00.751687  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.752341  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.752366  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.752283  256493 retry.go:31] will retry after 815.149471ms: waiting for machine to come up
	I0817 22:25:01.568847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:01.569679  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:01.569709  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:01.569547  256493 retry.go:31] will retry after 827.38414ms: waiting for machine to come up
	I0817 22:25:01.690788  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:01.726335  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:01.804837  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:01.844074  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:01.844121  255215 system_pods.go:61] "coredns-5d78c9869d-twvdv" [f8305fa5-f0e7-4090-af8f-a9eefe00be65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:01.844134  255215 system_pods.go:61] "etcd-embed-certs-437183" [409212ae-25eb-4221-b380-d73562531eb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:01.844143  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [a378c1e7-c439-427f-b56e-7aeb2397dda2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:01.844149  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [7d8c33ff-f8bd-4ca8-a1cd-7e03a3c1ea55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:01.844156  255215 system_pods.go:61] "kube-proxy-tqlkl" [3dc68d59-da16-4a8e-8664-24c280769e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:01.844162  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [54addcee-6a78-4a9d-9b15-a02e79ac92be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:01.844169  255215 system_pods.go:61] "metrics-server-74d5c6b9c-h5tt6" [6f8a838b-81d8-444d-aba1-fe46fefe8815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:01.844175  255215 system_pods.go:61] "storage-provisioner" [65cd2cbe-dcb1-4842-af27-551c8d0a93d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:01.844182  255215 system_pods.go:74] duration metric: took 39.323312ms to wait for pod list to return data ...
	I0817 22:25:01.844194  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:01.857431  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:01.857471  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:01.857485  255215 node_conditions.go:105] duration metric: took 13.285661ms to run NodePressure ...
	I0817 22:25:01.857511  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:02.318085  255215 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329089  255215 kubeadm.go:787] kubelet initialised
	I0817 22:25:02.329122  255215 kubeadm.go:788] duration metric: took 10.998414ms waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329133  255215 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.338233  255215 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:59.891549  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.386499  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.889146  255057 node_ready.go:49] node "no-preload-525875" has status "Ready":"True"
	I0817 22:25:02.889193  255057 node_ready.go:38] duration metric: took 7.084075756s waiting for node "no-preload-525875" to be "Ready" ...
	I0817 22:25:02.889209  255057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.915138  255057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926622  255057 pod_ready.go:92] pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:02.926662  255057 pod_ready.go:81] duration metric: took 11.479543ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926677  255057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.597215  255491 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.503742232s)
	I0817 22:25:04.597254  255491 crio.go:451] Took 3.503924 seconds to extract the tarball
	I0817 22:25:04.597269  255491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:04.640799  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:04.683452  255491 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:25:04.683478  255491 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:25:04.683564  255491 ssh_runner.go:195] Run: crio config
	I0817 22:25:04.755546  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:04.755579  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:04.755618  255491 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:04.755646  255491 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8444 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-321287 NodeName:default-k8s-diff-port-321287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:25:04.755865  255491 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-321287"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:04.755964  255491 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-321287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0817 22:25:04.756040  255491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:25:04.768800  255491 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:04.768884  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:04.779179  255491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0817 22:25:04.798848  255491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:04.818088  255491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0817 22:25:04.839021  255491 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:04.843996  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:04.858954  255491 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287 for IP: 192.168.50.30
	I0817 22:25:04.858992  255491 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:04.859193  255491 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:04.859263  255491 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:04.859371  255491 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/client.key
	I0817 22:25:04.859452  255491 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key.2a920f45
	I0817 22:25:04.859519  255491 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key
	I0817 22:25:04.859673  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:04.859717  255491 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:04.859733  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:04.859766  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:04.859800  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:04.859839  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:04.859901  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:04.860739  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:04.893191  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:25:04.923817  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:04.953192  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:25:04.985353  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:05.015743  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:05.043565  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:05.072283  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:05.102360  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:05.131090  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:05.158164  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:05.183921  255491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:05.201231  255491 ssh_runner.go:195] Run: openssl version
	I0817 22:25:05.207477  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:05.218696  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224473  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224551  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.230753  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:05.244810  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:05.255480  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.260972  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.261054  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.267724  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:05.280466  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:05.291975  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298403  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298519  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.306541  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:05.318878  255491 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:05.324755  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:05.333167  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:05.341869  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:05.350173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:05.357173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:05.364289  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
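	The sequence above is minikube's usual CA installation and validity check, run over SSH on the guest: each CA is hashed with `openssl x509 -hash` and symlinked into /etc/ssl/certs under the `<hash>.0` naming convention (hence 3ec20f2e.0, b5213941.0, 51391683.0), and each cluster certificate is probed with `-checkend 86400`, which fails if the cert expires within the next 24 hours. A minimal shell sketch of both steps, with illustrative paths:

	    # Hashed-name convention for the system trust store: /etc/ssl/certs/<subject-hash>.0
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	    # -checkend 86400 exits non-zero if the certificate expires within the next day
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "apiserver.crt valid for at least another 24h"
	    fi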
	I0817 22:25:05.372301  255491 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:05.372435  255491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:05.372493  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:05.409127  255491 cri.go:89] found id: ""
	I0817 22:25:05.409211  255491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:05.420288  255491 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:05.420316  255491 kubeadm.go:636] restartCluster start
	I0817 22:25:05.420401  255491 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:05.431336  255491 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.433035  255491 kubeconfig.go:92] found "default-k8s-diff-port-321287" server: "https://192.168.50.30:8444"
	I0817 22:25:05.437153  255491 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:05.446894  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.446956  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.459319  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.459353  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.459412  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.472543  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.973294  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.973386  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.986474  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.473007  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.473141  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.485870  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:02.398531  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:02.399142  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:02.399174  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:02.399045  256493 retry.go:31] will retry after 1.143040413s: waiting for machine to come up
	I0817 22:25:03.543421  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:03.544040  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:03.544076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:03.543971  256493 retry.go:31] will retry after 1.654291601s: waiting for machine to come up
	I0817 22:25:05.200880  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:05.201405  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:05.201435  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:05.201350  256493 retry.go:31] will retry after 1.752048888s: waiting for machine to come up
	I0817 22:25:04.379203  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.872822  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:04.499009  255057 pod_ready.go:92] pod "etcd-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.499040  255057 pod_ready.go:81] duration metric: took 1.572354603s waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.499057  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761691  255057 pod_ready.go:92] pod "kube-apiserver-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.761719  255057 pod_ready.go:81] duration metric: took 262.653075ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761734  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769937  255057 pod_ready.go:92] pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.769968  255057 pod_ready.go:81] duration metric: took 8.225874ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769983  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881406  255057 pod_ready.go:92] pod "kube-proxy-pzpk2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.881444  255057 pod_ready.go:81] duration metric: took 111.452654ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881461  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643623  255057 pod_ready.go:92] pod "kube-scheduler-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:05.643648  255057 pod_ready.go:81] duration metric: took 762.178998ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643658  255057 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:07.695130  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
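	Each pod_ready probe above (etcd, kube-apiserver, controller-manager, kube-proxy, scheduler, metrics-server) checks the pod's Ready condition through the API and records how long the wait took. Expressed with kubectl it amounts to roughly the following, using the profile name from the log as the context; this is an illustration, not how the test binary actually issues the call:

	    # Wait for the PodReady condition, as the pod_ready helper does via the API.
	    kubectl --context no-preload-525875 -n kube-system \
	        wait --for=condition=Ready pod/kube-scheduler-no-preload-525875 --timeout=6m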
	I0817 22:25:06.972803  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.972898  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.985259  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.473416  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.473551  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.485378  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.973567  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.973708  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.989454  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.472762  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.472894  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.489910  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.972732  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.972822  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.984958  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.473569  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.473709  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.490412  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.972908  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.972987  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.986072  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.473333  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.473429  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.485656  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.973314  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.973423  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.989391  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:11.472953  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.473077  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.485192  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.956350  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:06.956874  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:06.956904  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:06.956830  256493 retry.go:31] will retry after 2.09338178s: waiting for machine to come up
	I0817 22:25:09.052006  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:09.052516  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:09.052549  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:09.052447  256493 retry.go:31] will retry after 3.023234706s: waiting for machine to come up
	I0817 22:25:08.877674  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:09.370723  255215 pod_ready.go:92] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:09.370754  255215 pod_ready.go:81] duration metric: took 7.032445075s waiting for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:09.370767  255215 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893038  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:10.893076  255215 pod_ready.go:81] duration metric: took 1.522300039s waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893091  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918300  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:11.918330  255215 pod_ready.go:81] duration metric: took 1.025229003s waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918347  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.192198  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:12.692398  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:11.973001  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.973083  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.984794  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.473426  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.473527  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.489566  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.972736  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.972840  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.984972  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.473572  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.473665  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.485760  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.972804  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.972952  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.984788  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.473423  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.473501  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.484892  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.973394  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.973481  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.985492  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:15.447933  255491 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
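	Every "Checking apiserver status" line above is the same probe repeated on a roughly half-second interval: look for a kube-apiserver process whose command line mentions minikube. After about ten seconds of status-1 exits the restart logic gives up on the existing control plane and falls through to a full reconfigure. A sketch of the probe, with the process pattern copied from the log and wrapped in the kind of retry loop the Go code performs:

	    # Exit status 1 (no matching process) is what produces each
	    # "stopped: unable to get apiserver pid" warning above.
	    for _ in $(seq 1 20); do
	        if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
	            echo "kube-apiserver pid: ${pid}"
	            break
	        fi
	        sleep 0.5
	    done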
	I0817 22:25:15.447967  255491 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:15.447983  255491 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:15.448044  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:15.483471  255491 cri.go:89] found id: ""
	I0817 22:25:15.483596  255491 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:15.500292  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:15.510630  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:15.510695  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520738  255491 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520771  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:15.635683  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:12.079485  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:12.080041  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:12.080069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:12.079986  256493 retry.go:31] will retry after 4.097355523s: waiting for machine to come up
	I0817 22:25:16.178550  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:16.179032  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:16.179063  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:16.178988  256493 retry.go:31] will retry after 4.178327275s: waiting for machine to come up
	I0817 22:25:14.176089  255215 pod_ready.go:102] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:14.679850  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.679881  255215 pod_ready.go:81] duration metric: took 2.761525031s waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.679894  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685308  255215 pod_ready.go:92] pod "kube-proxy-tqlkl" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.685339  255215 pod_ready.go:81] duration metric: took 5.435708ms waiting for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685352  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967073  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.967099  255215 pod_ready.go:81] duration metric: took 281.740411ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967110  255215 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:17.277033  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:15.190295  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:17.193522  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:16.723896  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0881723s)
	I0817 22:25:16.723933  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:16.940953  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.025208  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
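	Because the kubeconfig and manifest files were missing, the restart path rebuilds the control plane piece by piece rather than running a full `kubeadm init`. The five phases invoked above are, in order (binary and config path as logged; the log injects the binaries directory via `env PATH=...` instead of calling the binary by path as this sketch does):

	    KUBEADM=/var/lib/minikube/binaries/v1.27.4/kubeadm
	    CFG=/var/tmp/minikube/kubeadm.yaml

	    sudo "$KUBEADM" init phase certs all         --config "$CFG"   # cluster certificates
	    sudo "$KUBEADM" init phase kubeconfig all    --config "$CFG"   # admin/kubelet/scheduler kubeconfigs
	    sudo "$KUBEADM" init phase kubelet-start     --config "$CFG"   # write kubelet config and start it
	    sudo "$KUBEADM" init phase control-plane all --config "$CFG"   # static pod manifests
	    sudo "$KUBEADM" init phase etcd local        --config "$CFG"   # local etcd static pod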
	I0817 22:25:17.110784  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:17.110880  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.123610  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.645363  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.145697  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.645211  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.145515  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.645764  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.665892  255491 api_server.go:72] duration metric: took 2.555110324s to wait for apiserver process to appear ...
	I0817 22:25:19.665920  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:19.665938  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
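	With a kube-apiserver process now present, the wait switches from pid probes to polling the /healthz endpoint at the URL logged above until it answers. A rough curl equivalent; the real client presents the generated cluster CA rather than skipping verification, so `-k` here is only to keep the sketch self-contained:

	    # Poll until the apiserver health endpoint returns the literal body "ok".
	    until curl -sk https://192.168.50.30:8444/healthz | grep -qx 'ok'; do
	        sleep 1
	    done
	    echo "apiserver healthy"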
	I0817 22:25:20.359726  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360375  254975 main.go:141] libmachine: (old-k8s-version-294781) Found IP for machine: 192.168.72.56
	I0817 22:25:20.360408  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserving static IP address...
	I0817 22:25:20.360426  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has current primary IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360798  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserved static IP address: 192.168.72.56
	I0817 22:25:20.360843  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.360866  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting for SSH to be available...
	I0817 22:25:20.360898  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | skip adding static IP to network mk-old-k8s-version-294781 - found existing host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"}
	I0817 22:25:20.360918  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Getting to WaitForSSH function...
	I0817 22:25:20.363319  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.363721  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.363767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.364016  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH client type: external
	I0817 22:25:20.364069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa (-rw-------)
	I0817 22:25:20.364115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:25:20.364135  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | About to run SSH command:
	I0817 22:25:20.364175  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | exit 0
	I0817 22:25:20.454327  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | SSH cmd err, output: <nil>: 
	I0817 22:25:20.454772  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetConfigRaw
	I0817 22:25:20.455585  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.458846  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.459420  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459910  254975 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/config.json ...
	I0817 22:25:20.460207  254975 machine.go:88] provisioning docker machine ...
	I0817 22:25:20.460240  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:20.460489  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460712  254975 buildroot.go:166] provisioning hostname "old-k8s-version-294781"
	I0817 22:25:20.460743  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460912  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.463811  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464166  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.464216  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464391  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.464610  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464779  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464936  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.465157  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.465566  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.465578  254975 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-294781 && echo "old-k8s-version-294781" | sudo tee /etc/hostname
	I0817 22:25:20.604184  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-294781
	
	I0817 22:25:20.604223  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.607313  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.607668  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.607706  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.608091  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.608335  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608511  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608656  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.608845  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.609344  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.609368  254975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-294781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-294781/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-294781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:25:20.731574  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:25:20.731639  254975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:25:20.731679  254975 buildroot.go:174] setting up certificates
	I0817 22:25:20.731697  254975 provision.go:83] configureAuth start
	I0817 22:25:20.731717  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.732057  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.735344  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.735748  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.735780  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.736038  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.738896  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739346  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.739384  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739562  254975 provision.go:138] copyHostCerts
	I0817 22:25:20.739634  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:25:20.739650  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:25:20.739733  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:25:20.739875  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:25:20.739889  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:25:20.739921  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:25:20.740027  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:25:20.740040  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:25:20.740069  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:25:20.740159  254975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-294781 san=[192.168.72.56 192.168.72.56 localhost 127.0.0.1 minikube old-k8s-version-294781]
	I0817 22:25:20.937408  254975 provision.go:172] copyRemoteCerts
	I0817 22:25:20.937480  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:25:20.937508  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.940609  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941074  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.941115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941294  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.941469  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.941678  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.941899  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.033976  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:25:21.062438  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:25:21.090325  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:25:21.116263  254975 provision.go:86] duration metric: configureAuth took 384.54455ms
	I0817 22:25:21.116295  254975 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:25:21.116550  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:25:21.116667  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.119767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120295  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.120351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.120735  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.120898  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.121114  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.121330  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.121982  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.122011  254975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:25:21.449644  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:25:21.449675  254975 machine.go:91] provisioned docker machine in 989.449203ms
	I0817 22:25:21.449686  254975 start.go:300] post-start starting for "old-k8s-version-294781" (driver="kvm2")
	I0817 22:25:21.449696  254975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:25:21.449713  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.450065  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:25:21.450112  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.453436  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.453847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.453893  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.454092  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.454320  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.454501  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.454682  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.544501  254975 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:25:21.549102  254975 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:25:21.549128  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:25:21.549201  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:25:21.549301  254975 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:25:21.549425  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:25:21.559169  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:21.585459  254975 start.go:303] post-start completed in 135.754284ms
	I0817 22:25:21.585496  254975 fix.go:56] fixHost completed within 24.48491231s
	I0817 22:25:21.585531  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.588650  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589045  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.589076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589236  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.589445  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589638  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589810  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.590026  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.590596  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.590621  254975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:25:21.704138  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311121.622295369
	
	I0817 22:25:21.704162  254975 fix.go:206] guest clock: 1692311121.622295369
	I0817 22:25:21.704170  254975 fix.go:219] Guest: 2023-08-17 22:25:21.622295369 +0000 UTC Remote: 2023-08-17 22:25:21.585502401 +0000 UTC m=+364.810906249 (delta=36.792968ms)
	I0817 22:25:21.704193  254975 fix.go:190] guest clock delta is within tolerance: 36.792968ms
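	The clock check above reads the guest's time over SSH and compares it against the host, accepting a small skew. The mangled `date +%!s(MISSING).%!N(MISSING)` a few lines earlier is the format string re-printed by the logger; the command that actually runs is presumably `date +%s.%N`. A sketch of the comparison, with an illustrative key path and host address taken from the log:

	    # Compare guest and host clocks; minikube tolerates a small delta.
	    guest=$(ssh -i ~/.minikube/machines/old-k8s-version-294781/id_rsa \
	                docker@192.168.72.56 'date +%s.%N')
	    host=$(date +%s.%N)
	    delta=$(echo "$host - $guest" | bc)
	    echo "guest clock delta: ${delta}s"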
	I0817 22:25:21.704200  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 24.603659499s
	I0817 22:25:21.704228  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.704524  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:21.707198  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707512  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.707555  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707715  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708285  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708516  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708605  254975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:25:21.708670  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.708790  254975 ssh_runner.go:195] Run: cat /version.json
	I0817 22:25:21.708816  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.711462  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711744  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711858  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.711906  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712090  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712154  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.712219  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712326  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712347  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712539  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712541  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712749  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712766  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.712936  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:19.775731  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.777036  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:19.693695  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:22.189616  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.818518  254975 ssh_runner.go:195] Run: systemctl --version
	I0817 22:25:21.824498  254975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:25:21.971461  254975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:25:21.978188  254975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:25:21.978271  254975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:25:21.993704  254975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:25:21.993738  254975 start.go:466] detecting cgroup driver to use...
	I0817 22:25:21.993820  254975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:25:22.009074  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:25:22.022874  254975 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:25:22.022935  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:25:22.036508  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:25:22.050919  254975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:25:22.174894  254975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:25:22.307776  254975 docker.go:212] disabling docker service ...
	I0817 22:25:22.307863  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:25:22.322017  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:25:22.334550  254975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:25:22.439721  254975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:25:22.554591  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:25:22.570460  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:25:22.588685  254975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:25:22.588767  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.599716  254975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:25:22.599801  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.611990  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.623873  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.636093  254975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:25:22.647438  254975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:25:22.657266  254975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:25:22.657338  254975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:25:22.672463  254975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:25:22.683508  254975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:25:22.799912  254975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:25:22.995704  254975 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:25:22.995816  254975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:25:23.003199  254975 start.go:534] Will wait 60s for crictl version
	I0817 22:25:23.003280  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:23.008350  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:25:23.042651  254975 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:25:23.042763  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.093624  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.142140  254975 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
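Note: the sed steps logged between 22:25:22.588 and 22:25:22.799 rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager, then restart the service. The Go sketch below replays the same shell commands; it is illustrative only (it runs them locally through "sh -c" rather than over the test's SSH session) and is not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step the same way the log's ssh_runner lines do,
// except locally; over SSH the command string would be identical.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}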
	I0817 22:25:24.666188  255491 api_server.go:269] stopped: https://192.168.50.30:8444/healthz: Get "https://192.168.50.30:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:24.666264  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:24.903729  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:24.903775  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:25.404125  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.420215  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.420261  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:25.903943  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.914463  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.914514  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:26.403966  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:26.414021  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:25:26.437708  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:26.437750  255491 api_server.go:131] duration metric: took 6.771821605s to wait for apiserver health ...
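Note: the healthz exchange above shows the apiserver on 192.168.50.30:8444 first rejecting the anonymous probe with 403, then answering 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally returning 200 after roughly 6.8s. A minimal sketch of such a wait loop follows; the 4-minute deadline and 500ms retry interval are assumptions read off the timestamps, not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate here, so verification is skipped.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.30:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 means anonymous access is still rejected; 500 lists the
			// post-start hooks that have not finished yet.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}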
	I0817 22:25:26.437779  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:26.437789  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:26.440095  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:26.441921  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:26.469640  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:26.514785  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:26.532553  255491 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:26.532616  255491 system_pods.go:61] "coredns-5d78c9869d-v74x9" [1c42e9be-16fa-47c2-ab04-9ec805320760] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:26.532631  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [a3655572-9d89-4ef6-85db-85dc454d1021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:26.532659  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [6786ac16-78df-4909-8542-0952af5beff6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:26.532675  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [ac8085d0-db9c-4229-b816-4753b7cfcae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:26.532686  255491 system_pods.go:61] "kube-proxy-4d9dx" [22447888-6570-47b7-baac-a5842688de9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:26.532697  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [bfcfc726-e659-4cb9-ad36-9887ddfaf170] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:26.532713  255491 system_pods.go:61] "metrics-server-74d5c6b9c-25l6w" [205dcf88-9d10-416b-8fd0-c93939208c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:26.532722  255491 system_pods.go:61] "storage-provisioner" [be486251-ebb9-4d0b-85c9-fe04e76634e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:26.532738  255491 system_pods.go:74] duration metric: took 17.92531ms to wait for pod list to return data ...
	I0817 22:25:26.532751  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:26.541133  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:26.541180  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:26.541197  255491 node_conditions.go:105] duration metric: took 8.431415ms to run NodePressure ...
	I0817 22:25:26.541228  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:23.143729  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:23.146678  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147145  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:23.147178  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147433  254975 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:25:23.151860  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:23.165714  254975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 22:25:23.165805  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:23.207234  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:23.207334  254975 ssh_runner.go:195] Run: which lz4
	I0817 22:25:23.211497  254975 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:25:23.216272  254975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:25:23.216309  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0817 22:25:25.170164  254975 crio.go:444] Took 1.958697 seconds to copy over tarball
	I0817 22:25:25.170253  254975 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:25:23.792764  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.276276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:24.193719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.692837  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.873863  255491 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:26.878982  255491 kubeadm.go:787] kubelet initialised
	I0817 22:25:26.879005  255491 kubeadm.go:788] duration metric: took 5.10797ms waiting for restarted kubelet to initialise ...
	I0817 22:25:26.879014  255491 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:26.885772  255491 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:29.448692  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:28.464409  254975 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.294096057s)
	I0817 22:25:28.464448  254975 crio.go:451] Took 3.294247 seconds to extract the tarball
	I0817 22:25:28.464461  254975 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:28.505546  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:28.550245  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:28.550282  254975 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:25:28.550393  254975 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.550419  254975 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.550425  254975 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.550466  254975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.550416  254975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.550388  254975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.550543  254975 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0817 22:25:28.550382  254975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551670  254975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551673  254975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.551765  254975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.551779  254975 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.551793  254975 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0817 22:25:28.551814  254975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.551841  254975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.552852  254975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.736900  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.746950  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.747215  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.749256  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.754813  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0817 22:25:28.767639  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.778459  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.834796  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.845176  254975 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0817 22:25:28.845233  254975 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.845295  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.896784  254975 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0817 22:25:28.896843  254975 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.896901  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919129  254975 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0817 22:25:28.919247  254975 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.919192  254975 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0817 22:25:28.919301  254975 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.919320  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919332  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972779  254975 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0817 22:25:28.972831  254975 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0817 22:25:28.972863  254975 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0817 22:25:28.972898  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972901  254975 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.973013  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.986909  254975 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0817 22:25:28.986957  254975 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.987007  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:29.083047  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:29.083137  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:29.083204  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:29.083276  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0817 22:25:29.083227  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0817 22:25:29.083354  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:29.083408  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:29.214678  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0817 22:25:29.214743  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0817 22:25:29.214777  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0817 22:25:29.214847  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0817 22:25:29.214934  254975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.221086  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0817 22:25:29.221101  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0817 22:25:29.221162  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0817 22:25:29.223655  254975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0817 22:25:29.223684  254975 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.223753  254975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0817 22:25:30.774685  254975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550895846s)
	I0817 22:25:30.774722  254975 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0817 22:25:30.774776  254975 cache_images.go:92] LoadImages completed in 2.224475745s
	W0817 22:25:30.774942  254975 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
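Note: the cache_images flow above inspects each required image with podman, removes stale references with crictl, and loads whatever tarballs exist under /var/lib/minikube/images; only pause_3.1 was cached here, which is why loading the kube-scheduler image fails with "no such file or directory". The sketch below shows that presence-check-then-load decision; the helper names and the tarball naming rule are assumptions for illustration, not minikube's cache_images.go.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// imagePresent reports whether podman already has the image; a non-zero exit
// from "podman image inspect" means it is missing from the store.
func imagePresent(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

// loadFromCache loads a cached tarball such as /var/lib/minikube/images/pause_3.1.
func loadFromCache(ref string) error {
	name := strings.ReplaceAll(filepath.Base(ref), ":", "_")
	tarball := filepath.Join("/var/lib/minikube/images", name)
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	for _, ref := range []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/coredns:1.6.2"} {
		if imagePresent(ref) {
			continue
		}
		if err := loadFromCache(ref); err != nil {
			fmt.Println(err)
		}
	}
}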
	I0817 22:25:30.775051  254975 ssh_runner.go:195] Run: crio config
	I0817 22:25:30.840592  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:30.840623  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:30.840650  254975 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:30.840680  254975 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-294781 NodeName:old-k8s-version-294781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 22:25:30.840917  254975 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-294781"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-294781
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.56:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:30.841030  254975 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-294781 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:25:30.841111  254975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0817 22:25:30.850719  254975 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:30.850818  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:30.862807  254975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0817 22:25:30.882111  254975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:30.900496  254975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0817 22:25:30.921163  254975 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:30.925789  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:30.941284  254975 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781 for IP: 192.168.72.56
	I0817 22:25:30.941335  254975 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:30.941556  254975 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:30.941617  254975 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:30.941728  254975 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/client.key
	I0817 22:25:30.941792  254975 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key.aa8f9bd0
	I0817 22:25:30.941827  254975 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key
	I0817 22:25:30.941948  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:30.941994  254975 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:30.942005  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:30.942039  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:30.942107  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:30.942141  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:30.942200  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:30.942953  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:30.973814  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:25:31.003939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:31.035137  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:25:31.063172  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:31.092059  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:31.120881  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:31.148113  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:31.175102  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:31.204939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:31.231548  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:31.263908  254975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:31.287143  254975 ssh_runner.go:195] Run: openssl version
	I0817 22:25:31.293380  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:31.307058  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313520  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313597  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.321182  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:31.332412  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:31.343318  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.348972  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.349044  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.355568  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:31.366257  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:31.376489  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382818  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382919  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.390171  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:31.400360  254975 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:31.406177  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:31.413881  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:31.422198  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:31.429468  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:31.437072  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:31.444150  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
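Note: the openssl runs above use "-checkend 86400" to confirm each control-plane certificate stays valid for at least another 24 hours. An equivalent check written directly against Go's crypto/x509 could look like the sketch below; the expiresWithin helper is invented for illustration and the path is one of the certificates named in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring what "openssl x509 -checkend" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}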
	I0817 22:25:31.450952  254975 kubeadm.go:404] StartCluster: {Name:old-k8s-version-294781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:31.451064  254975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:31.451140  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:31.489009  254975 cri.go:89] found id: ""
	I0817 22:25:31.489098  254975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:31.499098  254975 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:31.499126  254975 kubeadm.go:636] restartCluster start
	I0817 22:25:31.499191  254975 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:31.510909  254975 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.512049  254975 kubeconfig.go:92] found "old-k8s-version-294781" server: "https://192.168.72.56:8443"
	I0817 22:25:31.514634  254975 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:31.525968  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.526039  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.539397  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.539423  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.539485  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.552492  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
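Note: during restartCluster the status probe repeated through this stretch of the log is just "sudo pgrep -xnf kube-apiserver.*minikube.*"; exit status 1 means no matching process exists yet, so the caller keeps retrying on roughly a half-second cadence until the control plane comes back. A hypothetical stand-alone version of that probe (the retry count and interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning checks for a kube-apiserver process; with -f the pattern is
// matched against the full command line, -x requires a whole-line match, and
// -n picks the newest matching process.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for i := 0; i < 20; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver still not running")
}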
	I0817 22:25:28.276789  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:30.406349  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:29.190524  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.195732  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.919929  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.415784  255491 pod_ready.go:92] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:32.415817  255491 pod_ready.go:81] duration metric: took 5.530013816s waiting for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:32.415840  255491 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:34.435177  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.435405  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.053512  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.053604  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.065409  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.553555  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.553647  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.566402  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.052703  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.052785  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.069027  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.552583  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.552724  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.566692  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.053418  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.053493  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.065794  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.553389  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.553490  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.566130  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.052663  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.052753  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.065276  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.553446  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.553544  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.567754  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.053326  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.053407  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.066562  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.553098  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.553200  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.564869  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.777224  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:35.273781  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.276847  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:33.690890  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.190746  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.435673  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.435712  255491 pod_ready.go:81] duration metric: took 5.019858859s waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.435724  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441582  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.441602  255491 pod_ready.go:81] duration metric: took 5.870633ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441614  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448615  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.448643  255491 pod_ready.go:81] duration metric: took 7.021551ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448656  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454742  255491 pod_ready.go:92] pod "kube-proxy-4d9dx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.454768  255491 pod_ready.go:81] duration metric: took 6.104572ms waiting for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454780  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462598  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.462623  255491 pod_ready.go:81] duration metric: took 7.834341ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462637  255491 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:39.741207  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.053213  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.053363  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.065752  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:37.553604  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.553709  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.569278  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.052848  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.052956  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.065011  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.552809  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.552915  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.564702  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.053287  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.053378  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.065004  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.553557  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.553654  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.565776  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.053269  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.053352  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.065089  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.552595  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.552718  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.564921  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.053531  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:41.053617  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:41.065803  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.526724  254975 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:41.526774  254975 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:41.526788  254975 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:41.526858  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:41.560831  254975 cri.go:89] found id: ""
	I0817 22:25:41.560931  254975 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:41.577926  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:41.587081  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:41.587169  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596656  254975 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596690  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:41.716908  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:39.776178  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.275946  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:38.193834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:40.691324  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.692667  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:41.745307  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:44.242440  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.243469  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.840419  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123468828s)
	I0817 22:25:42.840454  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.062568  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.150374  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.265948  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:43.266043  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.284133  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.804512  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.304041  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.803961  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.828050  254975 api_server.go:72] duration metric: took 1.562100837s to wait for apiserver process to appear ...
	I0817 22:25:44.828085  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:44.828102  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.828570  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:44.828611  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.829005  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:45.329868  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.276477  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.775206  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:45.189460  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:47.690349  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:48.741121  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.742231  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.330553  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:50.330619  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.714219  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.714253  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:51.714268  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.756012  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.756052  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:49.276427  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.775567  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:49.698834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:52.190711  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.829442  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.888999  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:51.889031  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.329747  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.337398  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.337432  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.829817  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.839157  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.839187  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:53.329580  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:53.336858  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:25:53.347151  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:25:53.347191  254975 api_server.go:131] duration metric: took 8.519097199s to wait for apiserver health ...
	I0817 22:25:53.347204  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:53.347212  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:53.349243  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:52.743242  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:55.241261  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:53.350976  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:53.364808  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:53.397606  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:53.411868  254975 system_pods.go:59] 7 kube-system pods found
	I0817 22:25:53.411903  254975 system_pods.go:61] "coredns-5644d7b6d9-nz5d2" [5514f434-2c17-42dc-b35b-fef5bd6886fb] Running
	I0817 22:25:53.411909  254975 system_pods.go:61] "etcd-old-k8s-version-294781" [75919c29-02ae-46f6-8173-507b491d16da] Running
	I0817 22:25:53.411920  254975 system_pods.go:61] "kube-apiserver-old-k8s-version-294781" [f6d458ca-a84f-40dc-8b6a-b53fb8062c50] Running
	I0817 22:25:53.411930  254975 system_pods.go:61] "kube-controller-manager-old-k8s-version-294781" [0827f676-c11c-44b1-9bca-f8f905448490] Pending
	I0817 22:25:53.411937  254975 system_pods.go:61] "kube-proxy-f2bdh" [8b0dfe14-026a-44e1-9c6f-7f16fb61f90e] Running
	I0817 22:25:53.411943  254975 system_pods.go:61] "kube-scheduler-old-k8s-version-294781" [9ced2a30-44a8-421f-94ef-19be20b58c5d] Running
	I0817 22:25:53.411947  254975 system_pods.go:61] "storage-provisioner" [c9c05cca-5426-4071-a408-815c723a76f3] Running
	I0817 22:25:53.411954  254975 system_pods.go:74] duration metric: took 14.318728ms to wait for pod list to return data ...
	I0817 22:25:53.411961  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:53.415672  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:53.415715  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:53.415731  254975 node_conditions.go:105] duration metric: took 3.76549ms to run NodePressure ...
	I0817 22:25:53.415758  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:53.808911  254975 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:53.814276  254975 retry.go:31] will retry after 200.301174ms: kubelet not initialised
	I0817 22:25:54.020423  254975 retry.go:31] will retry after 376.047728ms: kubelet not initialised
	I0817 22:25:54.401967  254975 retry.go:31] will retry after 672.586884ms: kubelet not initialised
	I0817 22:25:55.079229  254975 retry.go:31] will retry after 1.101994757s: kubelet not initialised
	I0817 22:25:56.186236  254975 retry.go:31] will retry after 770.380926ms: kubelet not initialised
	I0817 22:25:53.777865  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.275799  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:54.690880  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.189416  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.242279  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.742604  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.961679  254975 retry.go:31] will retry after 2.235217601s: kubelet not initialised
	I0817 22:25:59.205012  254975 retry.go:31] will retry after 2.063266757s: kubelet not initialised
	I0817 22:26:01.275712  254975 retry.go:31] will retry after 5.105867057s: kubelet not initialised
	I0817 22:25:58.774815  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.275856  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.190180  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.692286  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.744707  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.240683  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.388158  254975 retry.go:31] will retry after 3.608427827s: kubelet not initialised
	I0817 22:26:03.775281  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.274839  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.190713  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.689980  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.742399  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.742739  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.004038  254975 retry.go:31] will retry after 8.940252852s: kubelet not initialised
	I0817 22:26:08.275499  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.275871  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.696436  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:11.189718  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.240363  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.241894  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:12.776238  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.274945  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.690119  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:16.189786  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:17.741982  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:20.242289  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.951040  254975 retry.go:31] will retry after 14.553103306s: kubelet not initialised
	I0817 22:26:17.774269  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:19.775075  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.274390  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.690720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:21.191013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.242355  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.742592  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.275310  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:26.774906  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:23.690032  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:25.690127  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.692342  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.243421  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:29.245714  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:28.777378  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.274134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:30.189730  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:32.689849  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.741791  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.240900  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:36.241988  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:33.521718  254975 kubeadm.go:787] kubelet initialised
	I0817 22:26:33.521745  254975 kubeadm.go:788] duration metric: took 39.712803989s waiting for restarted kubelet to initialise ...
	I0817 22:26:33.521755  254975 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:26:33.535522  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545447  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.545474  254975 pod_ready.go:81] duration metric: took 9.918514ms waiting for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545487  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551823  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.551853  254975 pod_ready.go:81] duration metric: took 6.357251ms waiting for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551867  254975 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559246  254975 pod_ready.go:92] pod "etcd-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.559278  254975 pod_ready.go:81] duration metric: took 7.402957ms waiting for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559291  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565344  254975 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.565373  254975 pod_ready.go:81] duration metric: took 6.072723ms waiting for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565387  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909036  254975 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.909073  254975 pod_ready.go:81] duration metric: took 343.677116ms waiting for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909089  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308592  254975 pod_ready.go:92] pod "kube-proxy-f2bdh" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.308619  254975 pod_ready.go:81] duration metric: took 399.522419ms waiting for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308630  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708489  254975 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.708517  254975 pod_ready.go:81] duration metric: took 399.879822ms waiting for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708528  254975 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.275646  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:35.774730  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.692013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.191914  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.242929  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.741450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.516268  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.275712  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.774133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.690461  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:41.690828  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.242204  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.741216  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:42.016209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.516019  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.275668  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.776837  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.189846  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:46.691439  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.742285  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.241123  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.016817  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.517406  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:48.276244  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.774977  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.189105  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:51.190270  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.241800  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.739978  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.016631  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.515565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.516890  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.274258  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.278000  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.192619  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.693990  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.742737  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.241115  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.241654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.015461  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.017347  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:57.775264  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.775399  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.776382  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:58.190121  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:00.190792  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:02.697428  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.741654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.742940  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.516565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.516966  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:04.275212  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:06.277355  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.190366  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:07.190973  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.244485  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.741985  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.015202  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.016691  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.774384  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.774729  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:09.692011  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.190853  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.742313  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:15.241577  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.514881  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.516950  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.517383  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.774867  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.775482  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.274793  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.689813  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.692012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.243159  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.517518  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.016576  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.275829  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.276653  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.692315  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.189564  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:22.240740  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:24.241960  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.242201  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.017348  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.515756  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.775957  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.275937  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.189646  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.690338  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.690947  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.741912  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.742165  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.516071  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.517838  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.276630  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.775134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.691012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:31.696187  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:33.241142  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:35.243536  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.017452  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.515974  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.516450  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.775448  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.775822  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.274968  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.188369  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.188928  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.741436  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.741983  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.015982  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.516526  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.278879  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.774782  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:38.189378  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:40.695851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:42.240995  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.741178  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.015737  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.018254  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.776276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.276133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.188678  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:45.189618  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:47.191825  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.741669  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.241194  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.242571  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.516687  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.016735  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.277486  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:50.775420  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.689852  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.691216  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.741209  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.743232  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.518209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.016075  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.275443  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.774204  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.692276  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.190072  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.242009  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:00.242183  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.516449  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.016290  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:57.775327  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:59.775642  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.275827  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.691467  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.189998  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.740875  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.742481  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.523305  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.016025  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.275917  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.777604  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.190940  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:05.690559  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.693124  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.241721  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.241889  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:08.017490  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.018815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.274176  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.275009  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.190851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.689465  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.741056  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.241846  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:16.243898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.516550  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.017547  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:13.276368  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.773960  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.690587  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.189824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:18.742657  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.243561  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.515978  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:20.016035  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.774474  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.776240  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.275209  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.194335  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.691142  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:23.743251  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.241450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.021055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.516645  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.776861  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.274029  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.189740  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.691801  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:28.242364  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:30.740610  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.016851  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.017289  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.517096  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.774126  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.275287  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.189744  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.691190  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.741643  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:35.242108  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.015792  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.016247  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.773849  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.777072  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:33.692774  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.189115  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:37.741756  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.244685  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.016815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.017616  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:39.276756  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:41.774190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.190001  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.690824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.742547  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.241354  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.518073  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.016560  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.776627  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:46.275092  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.189166  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.692178  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.697772  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.242829  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.741555  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.516429  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.516588  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:48.775347  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:51.274069  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:50.191415  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.694362  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.242367  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.742705  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.019113  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.516748  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:53.275190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.773511  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.189720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.189811  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.241152  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.242170  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.015866  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.016464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.515901  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.776667  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:00.273941  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.190719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.190988  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.741107  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.742524  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.243093  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.516444  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.017964  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:02.775583  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.280071  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.690586  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.643882  255057 pod_ready.go:81] duration metric: took 4m0.000182343s waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:05.643921  255057 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:05.643932  255057 pod_ready.go:38] duration metric: took 4m2.754707603s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:05.643956  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:29:05.643998  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:05.644060  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:05.703194  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:05.703221  255057 cri.go:89] found id: ""
	I0817 22:29:05.703229  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:05.703283  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.708602  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:05.708676  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:05.747581  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:05.747610  255057 cri.go:89] found id: ""
	I0817 22:29:05.747619  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:05.747692  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.753231  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:05.753331  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:05.795460  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:05.795489  255057 cri.go:89] found id: ""
	I0817 22:29:05.795499  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:05.795562  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.801181  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:05.801268  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:05.840433  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:05.840463  255057 cri.go:89] found id: ""
	I0817 22:29:05.840472  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:05.840546  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.845974  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:05.846039  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:05.886216  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:05.886243  255057 cri.go:89] found id: ""
	I0817 22:29:05.886252  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:05.886314  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.891204  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:05.891286  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:05.927636  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:05.927661  255057 cri.go:89] found id: ""
	I0817 22:29:05.927669  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:05.927732  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.932173  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:05.932230  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:05.963603  255057 cri.go:89] found id: ""
	I0817 22:29:05.963634  255057 logs.go:284] 0 containers: []
	W0817 22:29:05.963646  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:05.963654  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:05.963727  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:05.996465  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:05.996489  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:05.996496  255057 cri.go:89] found id: ""
	I0817 22:29:05.996505  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:05.996572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.001291  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.006314  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:06.006348  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:06.051348  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:06.051386  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:06.226315  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:06.226362  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:06.263289  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:06.263321  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:06.308223  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:06.308262  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:06.346964  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:06.347001  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:06.382834  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:06.382878  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:06.431491  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:06.431527  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:06.485901  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:06.485948  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:07.054256  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:07.054315  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:07.093229  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093417  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093570  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093737  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.119377  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:07.119420  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:07.137712  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:07.137756  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:07.187463  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:07.187511  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:07.252728  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252775  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:07.252844  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:07.252856  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252865  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252872  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252878  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.252884  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252890  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:08.741270  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:11.245029  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:08.516388  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:10.518542  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:07.775391  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:09.775841  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:12.276748  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.741788  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:16.242264  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.018983  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:15.516221  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.774832  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.967926  255215 pod_ready.go:81] duration metric: took 4m0.000797383s waiting for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:14.967968  255215 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:14.967995  255215 pod_ready.go:38] duration metric: took 4m12.638851973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:14.968025  255215 kubeadm.go:640] restartCluster took 4m34.07416066s
	W0817 22:29:14.968112  255215 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:14.968150  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:17.254245  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:29:17.278452  255057 api_server.go:72] duration metric: took 4m21.775005609s to wait for apiserver process to appear ...
	I0817 22:29:17.278488  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:29:17.278540  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:17.278675  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:17.317529  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:17.317554  255057 cri.go:89] found id: ""
	I0817 22:29:17.317562  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:17.317626  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.323505  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:17.323593  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:17.367258  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.367282  255057 cri.go:89] found id: ""
	I0817 22:29:17.367290  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:17.367355  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.372332  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:17.372424  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:17.406884  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:17.406914  255057 cri.go:89] found id: ""
	I0817 22:29:17.406923  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:17.406990  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.411562  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:17.411626  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:17.452516  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.452549  255057 cri.go:89] found id: ""
	I0817 22:29:17.452560  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:17.452654  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.458237  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:17.458327  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:17.498524  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:17.498550  255057 cri.go:89] found id: ""
	I0817 22:29:17.498559  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:17.498621  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.504941  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:17.505024  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:17.543542  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.543570  255057 cri.go:89] found id: ""
	I0817 22:29:17.543580  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:17.543646  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.548420  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:17.548488  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:17.589411  255057 cri.go:89] found id: ""
	I0817 22:29:17.589441  255057 logs.go:284] 0 containers: []
	W0817 22:29:17.589449  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:17.589455  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:17.589520  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:17.624044  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:17.624075  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.624083  255057 cri.go:89] found id: ""
	I0817 22:29:17.624092  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:17.624160  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.631040  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.635336  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:17.635359  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:17.688966  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689294  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689576  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689899  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:17.729861  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:17.729923  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:17.746619  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:17.746663  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.805149  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:17.805198  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.842639  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:17.842673  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.905357  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:17.905406  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.943860  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:17.943893  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:18.242331  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:20.742262  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:17.517585  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:19.519464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:18.114000  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:18.114038  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:18.176549  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:18.176602  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:18.211903  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:18.211947  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:18.246566  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:18.246600  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:18.280810  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:18.280853  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:18.831902  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:18.831957  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:18.883170  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883219  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:18.883304  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:18.883323  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883336  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883352  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883364  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:18.883382  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883391  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:23.242587  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:25.742126  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:22.017269  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:24.017806  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:26.516458  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.241489  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:30.741723  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.516703  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:31.016367  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.884252  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:29:28.889957  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:29:28.891532  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:29:28.891560  255057 api_server.go:131] duration metric: took 11.613062869s to wait for apiserver health ...
	I0817 22:29:28.891571  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:29:28.891602  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:28.891669  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:28.927462  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:28.927496  255057 cri.go:89] found id: ""
	I0817 22:29:28.927506  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:28.927572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.932195  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:28.932284  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:28.974041  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:28.974092  255057 cri.go:89] found id: ""
	I0817 22:29:28.974103  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:28.974172  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.978230  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:28.978302  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:29.012431  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.012459  255057 cri.go:89] found id: ""
	I0817 22:29:29.012469  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:29.012539  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.017232  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:29.017311  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:29.051208  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.051235  255057 cri.go:89] found id: ""
	I0817 22:29:29.051242  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:29.051292  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.056125  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:29.056193  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:29.094165  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.094196  255057 cri.go:89] found id: ""
	I0817 22:29:29.094207  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:29.094277  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.098992  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:29.099054  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:29.138522  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.138552  255057 cri.go:89] found id: ""
	I0817 22:29:29.138561  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:29.138614  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.143075  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:29.143159  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:29.177797  255057 cri.go:89] found id: ""
	I0817 22:29:29.177831  255057 logs.go:284] 0 containers: []
	W0817 22:29:29.177842  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:29.177850  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:29.177916  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:29.208897  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.208922  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.208928  255057 cri.go:89] found id: ""
	I0817 22:29:29.208937  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:29.209008  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.213083  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.217020  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:29.217043  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:29.253559  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253779  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253989  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.254225  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:29.280705  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:29.280746  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:29.295400  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:29.295432  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:29.344222  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:29.344268  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:29.482768  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:29.482812  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:29.541274  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:29.541317  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.577842  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:29.577876  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.613556  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:29.613595  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.654840  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:29.654886  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.711929  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:29.711974  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.749746  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:29.749802  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.782899  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:29.782932  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:30.286425  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:30.286488  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:30.328588  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328616  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:30.328686  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:30.328701  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328715  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328729  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328745  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:30.328754  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328762  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
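The kubelet warnings collected above come from the Node authorizer: immediately after the restart it has no recorded relationship between node 'no-preload-525875' and the kube-proxy / kube-root-ca.crt ConfigMaps, so list and watch requests made with the node's identity are rejected until the node object and its pods are re-synced. A hedged way to reproduce the same denial outside the test harness, assuming the kubeconfig context carries the profile name, is to impersonate the node identity:

    kubectl --context no-preload-525875 auth can-i list configmaps \
      --namespace kube-system \
      --as system:node:no-preload-525875 --as-group system:nodes

This should answer "no" for the same reason the reflector calls above were forbidden; once the authorizer's graph is rebuilt the kubelet's own informers recover on their next retry.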
	I0817 22:29:32.741952  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.241640  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:33.516723  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.516887  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.339646  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:29:40.339676  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.339681  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.339685  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.339690  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.339694  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.339698  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.339705  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.339711  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.339722  255057 system_pods.go:74] duration metric: took 11.448139171s to wait for pod list to return data ...
	I0817 22:29:40.339730  255057 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:29:40.344246  255057 default_sa.go:45] found service account: "default"
	I0817 22:29:40.344271  255057 default_sa.go:55] duration metric: took 4.534553ms for default service account to be created ...
	I0817 22:29:40.344280  255057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:29:40.353485  255057 system_pods.go:86] 8 kube-system pods found
	I0817 22:29:40.353521  255057 system_pods.go:89] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.353529  255057 system_pods.go:89] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.353537  255057 system_pods.go:89] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.353546  255057 system_pods.go:89] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.353553  255057 system_pods.go:89] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.353560  255057 system_pods.go:89] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.353579  255057 system_pods.go:89] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.353589  255057 system_pods.go:89] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.353598  255057 system_pods.go:126] duration metric: took 9.313259ms to wait for k8s-apps to be running ...
	I0817 22:29:40.353612  255057 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:29:40.353685  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:40.376714  255057 system_svc.go:56] duration metric: took 23.088082ms WaitForService to wait for kubelet.
	I0817 22:29:40.376759  255057 kubeadm.go:581] duration metric: took 4m44.873323742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:29:40.377191  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:29:40.385016  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:29:40.385043  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:29:40.385055  255057 node_conditions.go:105] duration metric: took 7.857619ms to run NodePressure ...
	I0817 22:29:40.385068  255057 start.go:228] waiting for startup goroutines ...
	I0817 22:29:40.385074  255057 start.go:233] waiting for cluster config update ...
	I0817 22:29:40.385085  255057 start.go:242] writing updated cluster config ...
	I0817 22:29:40.385411  255057 ssh_runner.go:195] Run: rm -f paused
	I0817 22:29:40.457420  255057 start.go:600] kubectl: 1.28.0, cluster: 1.28.0-rc.1 (minor skew: 0)
	I0817 22:29:40.460043  255057 out.go:177] * Done! kubectl is now configured to use "no-preload-525875" cluster and "default" namespace by default
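At this point the no-preload profile reports "Done!" even though metrics-server-57f55c9bc5-25p7z is still Pending with its container not ready. To inspect that pod by hand after startup, assuming the kubeconfig context is named after the profile, something like the following would show why the container never starts:

    kubectl --context no-preload-525875 -n kube-system get pods -o wide
    kubectl --context no-preload-525875 -n kube-system describe pod metrics-server-57f55c9bc5-25p7z

The embed-certs startup later in this log points the same addon at fake.domain/registry.k8s.io/echoserver:1.4, an intentionally unpullable image, so a metrics-server pod that never becomes Ready is likely part of the test scenario rather than a cluster fault.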
	I0817 22:29:37.242898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:37.462917  255491 pod_ready.go:81] duration metric: took 4m0.00026087s waiting for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:37.462956  255491 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:37.463009  255491 pod_ready.go:38] duration metric: took 4m10.583985022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:37.463050  255491 kubeadm.go:640] restartCluster took 4m32.042723788s
	W0817 22:29:37.463141  255491 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:37.463185  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:37.517852  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.016790  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:42.517001  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:45.016757  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:47.291163  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.322979002s)
	I0817 22:29:47.291246  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:47.305948  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:29:47.316036  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:29:47.325470  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:29:47.325519  255215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:29:47.566297  255215 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
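The init just started is driven from /var/tmp/minikube/kubeadm.yaml, which is not reproduced in the log. As a sketch only, a config of the shape minikube typically generates for this run (Kubernetes v1.27.4, CRI-O socket, the control-plane endpoint that appears in the join command further down) would look roughly like:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.27.4
    controlPlaneEndpoint: control-plane.minikube.internal:8443

The long --ignore-preflight-errors list is there to tolerate leftovers from the cluster that was just reset (existing manifests and etcd data directories) together with host constraints such as port 10250 already bound, swap, and CPU/memory minimums.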
	I0817 22:29:47.017112  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:49.017246  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:51.018095  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:53.519020  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:56.016627  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.087786  255215 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:29:59.087860  255215 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:29:59.087991  255215 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:29:59.088169  255215 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:29:59.088306  255215 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:29:59.088388  255215 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:29:59.090358  255215 out.go:204]   - Generating certificates and keys ...
	I0817 22:29:59.090460  255215 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:29:59.090547  255215 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:29:59.090660  255215 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:29:59.090766  255215 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:29:59.090886  255215 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:29:59.090976  255215 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:29:59.091060  255215 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:29:59.091152  255215 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:29:59.091250  255215 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:29:59.091350  255215 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:29:59.091435  255215 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:29:59.091514  255215 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:29:59.091589  255215 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:29:59.091655  255215 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:29:59.091759  255215 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:29:59.091836  255215 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:29:59.091960  255215 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:29:59.092070  255215 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:29:59.092127  255215 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:29:59.092207  255215 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:29:59.094268  255215 out.go:204]   - Booting up control plane ...
	I0817 22:29:59.094408  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:29:59.094513  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:29:59.094594  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:29:59.094719  255215 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:29:59.094944  255215 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:29:59.095031  255215 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504676 seconds
	I0817 22:29:59.095206  255215 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:29:59.095401  255215 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:29:59.095494  255215 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:29:59.095757  255215 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-437183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:29:59.095844  255215 kubeadm.go:322] [bootstrap-token] Using token: 0fftkt.nm31ryo8p4990tdr
	I0817 22:29:59.097581  255215 out.go:204]   - Configuring RBAC rules ...
	I0817 22:29:59.097750  255215 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:29:59.097884  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:29:59.098097  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:29:59.098258  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:29:59.098405  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:29:59.098510  255215 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:29:59.098679  255215 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:29:59.098745  255215 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:29:59.098802  255215 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:29:59.098811  255215 kubeadm.go:322] 
	I0817 22:29:59.098889  255215 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:29:59.098898  255215 kubeadm.go:322] 
	I0817 22:29:59.099010  255215 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:29:59.099033  255215 kubeadm.go:322] 
	I0817 22:29:59.099069  255215 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:29:59.099142  255215 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:29:59.099221  255215 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:29:59.099232  255215 kubeadm.go:322] 
	I0817 22:29:59.099297  255215 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:29:59.099307  255215 kubeadm.go:322] 
	I0817 22:29:59.099365  255215 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:29:59.099374  255215 kubeadm.go:322] 
	I0817 22:29:59.099446  255215 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:29:59.099552  255215 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:29:59.099660  255215 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:29:59.099670  255215 kubeadm.go:322] 
	I0817 22:29:59.099799  255215 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:29:59.099909  255215 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:29:59.099917  255215 kubeadm.go:322] 
	I0817 22:29:59.100037  255215 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100173  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:29:59.100205  255215 kubeadm.go:322] 	--control-plane 
	I0817 22:29:59.100218  255215 kubeadm.go:322] 
	I0817 22:29:59.100348  255215 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:29:59.100359  255215 kubeadm.go:322] 
	I0817 22:29:59.100472  255215 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100610  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:29:59.100639  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:29:59.100650  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:29:59.102534  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:29:58.017949  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:00.519619  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.104107  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:29:59.128756  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
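The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in the log; an illustrative bridge conflist of the same kind, with field values that are assumptions rather than values read from this run, looks like:

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

CRI-O loads whatever configuration sits in /etc/cni/net.d, so writing this file is essentially all the "Configuring bridge CNI" step needs to do beyond creating the directory.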
	I0817 22:29:59.172002  255215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=embed-certs-437183 minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.717974  255215 ops.go:34] apiserver oom_adj: -16
	I0817 22:29:59.718154  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.815994  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.419198  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.919196  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.419096  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.919517  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:02.419076  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.017120  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:05.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:02.919289  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.419268  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.919021  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.418663  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.919015  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.419325  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.919309  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.418701  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.919301  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.418670  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.919445  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.419363  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.918988  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.418788  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.918948  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.418731  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.919293  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.419374  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.578800  255215 kubeadm.go:1081] duration metric: took 12.40679081s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:11.578850  255215 kubeadm.go:406] StartCluster complete in 5m30.729798213s
	I0817 22:30:11.578877  255215 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.578990  255215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:11.581741  255215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.582107  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:11.582305  255215 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:11.582414  255215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-437183"
	I0817 22:30:11.582435  255215 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-437183"
	I0817 22:30:11.582433  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:11.582436  255215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-437183"
	I0817 22:30:11.582449  255215 addons.go:69] Setting metrics-server=true in profile "embed-certs-437183"
	I0817 22:30:11.582461  255215 addons.go:231] Setting addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:11.582465  255215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-437183"
	W0817 22:30:11.582467  255215 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:11.582521  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	W0817 22:30:11.582443  255215 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:11.582609  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.582956  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582976  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582992  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583000  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583326  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.583361  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.600606  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0817 22:30:11.601162  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.601890  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.601918  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.602386  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.603044  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.603110  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.603922  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0817 22:30:11.604193  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0817 22:30:11.604476  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.604711  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.605320  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605342  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605474  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605500  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605874  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.605927  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.606184  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.606616  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.606654  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.622026  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0817 22:30:11.622822  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.623522  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.623556  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.624021  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.624332  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.626478  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.629171  255215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:11.627845  255215 addons.go:231] Setting addon default-storageclass=true in "embed-certs-437183"
	W0817 22:30:11.629212  255215 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:11.629267  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.628437  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0817 22:30:11.629683  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.631294  255215 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.631295  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.629905  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.631315  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:11.631339  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.632333  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.632356  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.632860  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.633085  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.635520  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.635727  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.638116  255215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:09.776936  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.313725935s)
	I0817 22:30:09.777008  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:09.794808  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:09.806086  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:09.818495  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:09.818547  255491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:30:10.061316  255491 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:30:11.636353  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.636644  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.640483  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.640486  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:11.640508  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:11.640535  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.640703  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.640905  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.641073  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.645685  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646351  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.646376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646867  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.647096  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.647286  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.647444  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.655819  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0817 22:30:11.656540  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.657308  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.657326  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.657864  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.658485  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.658520  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.679610  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0817 22:30:11.680268  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.680977  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.681013  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.681485  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.681722  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.683711  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.686274  255215 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.686297  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:11.686323  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.692154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.692160  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692245  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.692288  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692447  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.692691  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.692899  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.742259  255215 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-437183" context rescaled to 1 replicas
	I0817 22:30:11.742317  255215 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:11.744647  255215 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:07.516999  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:10.016647  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:11.746674  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:11.833127  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.853282  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:11.853316  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:11.858219  255215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.858353  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:11.889330  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.896554  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:11.896595  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:11.906260  255215 node_ready.go:49] node "embed-certs-437183" has status "Ready":"True"
	I0817 22:30:11.906292  255215 node_ready.go:38] duration metric: took 48.027482ms waiting for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.906305  255215 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:11.949379  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:11.949409  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:12.023543  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:12.131426  255215 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:14.420517  255215 pod_ready.go:102] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.647805  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.814629092s)
	I0817 22:30:14.647842  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78945104s)
	I0817 22:30:14.647874  255215 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
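The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block in front of the existing "forward . /etc/resolv.conf" line and a "log" directive before "errors". Only those inserted lines are taken from the command above; the surrounding plugins are assumed to be the stock CoreDNS server block, so the resulting Corefile is roughly:

    .:53 {
        log
        errors
        health
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }

This is what lets pods resolve host.minikube.internal to 192.168.39.1, the libvirt network's host-side address for this profile.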
	I0817 22:30:14.647904  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.758517925s)
	I0817 22:30:14.647915  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648017  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648042  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648067  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648478  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.648532  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.648626  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.648638  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648656  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648882  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.649025  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.649050  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.649069  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.650529  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.650577  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.650586  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.650600  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.650614  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.651171  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.651230  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.652509  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652529  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.652688  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652708  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.175766  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.152137099s)
	I0817 22:30:15.175888  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.175915  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176344  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.176343  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.176428  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.176452  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.176488  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176915  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.178804  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.178827  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.178840  255215 addons.go:467] Verifying addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:15.180928  255215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:30:12.018605  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.519226  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:15.182515  255215 addons.go:502] enable addons completed in 3.600222172s: enabled=[default-storageclass storage-provisioner metrics-server]
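Enabling the metrics-server addon here only confirms that its manifests were accepted; the Deployment's pod stays Pending because the test substitutes an unpullable image. A hedged way to see what the addon registered, again assuming the context name matches the profile, is:

    kubectl --context embed-certs-437183 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-437183 -n kube-system get deploy,pods -l k8s-app=metrics-server

Until that pod reports Ready the APIService stays unavailable and `kubectl top nodes` fails, which is consistent with the metrics-server pods never leaving Pending anywhere in this log.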
	I0817 22:30:16.920634  255215 pod_ready.go:92] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.920664  255215 pod_ready.go:81] duration metric: took 4.789200515s waiting for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.920674  255215 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937440  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.937469  255215 pod_ready.go:81] duration metric: took 16.789093ms waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937483  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944411  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.944437  255215 pod_ready.go:81] duration metric: took 6.944986ms waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944451  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952239  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.952267  255215 pod_ready.go:81] duration metric: took 7.807798ms waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952281  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815597  255215 pod_ready.go:92] pod "kube-proxy-2f6jz" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:17.815630  255215 pod_ready.go:81] duration metric: took 863.340907ms waiting for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815644  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108648  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:18.108683  255215 pod_ready.go:81] duration metric: took 293.029473ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108693  255215 pod_ready.go:38] duration metric: took 6.202373203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:18.108726  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:18.108789  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:18.129379  255215 api_server.go:72] duration metric: took 6.38701969s to wait for apiserver process to appear ...
	I0817 22:30:18.129409  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:18.129425  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:30:18.138226  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:30:18.141542  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:18.141568  255215 api_server.go:131] duration metric: took 12.152138ms to wait for apiserver health ...
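The healthz probe above is a plain GET against the apiserver. Assuming anonymous access to the health endpoints is left at its default (the system:public-info-viewer binding covers /healthz, /livez and /readyz), the same check can be repeated by hand against this cluster's endpoint:

    curl -k https://192.168.39.186:8443/healthz
    curl -k https://192.168.39.186:8443/readyz?verbose

The first returns the "ok" body recorded above once the control plane is healthy; the verbose form additionally lists the individual readiness checks.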
	I0817 22:30:18.141579  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:18.312736  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:30:18.312782  255215 system_pods.go:61] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.312790  255215 system_pods.go:61] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.312798  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.312804  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.312811  255215 system_pods.go:61] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.312817  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.312831  255215 system_pods.go:61] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.312841  255215 system_pods.go:61] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.312855  255215 system_pods.go:74] duration metric: took 171.269837ms to wait for pod list to return data ...
	I0817 22:30:18.312868  255215 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:18.511271  255215 default_sa.go:45] found service account: "default"
	I0817 22:30:18.511380  255215 default_sa.go:55] duration metric: took 198.492073ms for default service account to be created ...
	I0817 22:30:18.511401  255215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:18.710880  255215 system_pods.go:86] 8 kube-system pods found
	I0817 22:30:18.710911  255215 system_pods.go:89] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.710917  255215 system_pods.go:89] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.710921  255215 system_pods.go:89] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.710926  255215 system_pods.go:89] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.710929  255215 system_pods.go:89] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.710933  255215 system_pods.go:89] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.710943  255215 system_pods.go:89] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.710949  255215 system_pods.go:89] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.710958  255215 system_pods.go:126] duration metric: took 199.549571ms to wait for k8s-apps to be running ...
	I0817 22:30:18.710967  255215 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:18.711013  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:18.725788  255215 system_svc.go:56] duration metric: took 14.807351ms WaitForService to wait for kubelet.
	I0817 22:30:18.725819  255215 kubeadm.go:581] duration metric: took 6.983465617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:18.725846  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:18.908038  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:18.908079  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:18.908093  255215 node_conditions.go:105] duration metric: took 182.240177ms to run NodePressure ...
	I0817 22:30:18.908108  255215 start.go:228] waiting for startup goroutines ...
	I0817 22:30:18.908127  255215 start.go:233] waiting for cluster config update ...
	I0817 22:30:18.908142  255215 start.go:242] writing updated cluster config ...
	I0817 22:30:18.908536  255215 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:18.962718  255215 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:18.965052  255215 out.go:177] * Done! kubectl is now configured to use "embed-certs-437183" cluster and "default" namespace by default
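The api_server.go lines above record minikube polling the control plane's `/healthz` endpoint until it answers `200` with body `ok`. Below is a minimal illustrative sketch of that kind of probe in Go; it is not minikube's actual implementation, and the hard-coded URL and the skip-verify TLS setting are assumptions made only for the sketch.

```go
// healthz_probe.go - illustrative only; not minikube's implementation.
// Polls an apiserver /healthz endpoint until it returns 200 "ok" or the
// timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification for the sketch; a real
		// check would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.186:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```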
	I0817 22:30:17.018314  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:19.517055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:21.517216  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:22.302082  255491 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:30:22.302198  255491 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:22.302316  255491 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:22.302392  255491 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:22.302537  255491 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:22.302623  255491 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:22.304947  255491 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:22.305043  255491 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:22.305112  255491 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:22.305227  255491 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:22.305295  255491 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:22.305389  255491 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:22.305466  255491 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:22.305540  255491 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:22.305614  255491 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:22.305703  255491 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:22.305801  255491 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:22.305861  255491 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:22.305956  255491 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:22.306043  255491 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:22.306141  255491 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:22.306231  255491 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:22.306313  255491 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:22.306462  255491 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:22.306597  255491 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:22.306674  255491 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:30:22.306778  255491 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:22.308372  255491 out.go:204]   - Booting up control plane ...
	I0817 22:30:22.308478  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:22.308565  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:22.308644  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:22.308735  255491 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:22.308942  255491 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:22.309046  255491 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003655 seconds
	I0817 22:30:22.309195  255491 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:22.309352  255491 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:22.309430  255491 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:22.309656  255491 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-321287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:30:22.309729  255491 kubeadm.go:322] [bootstrap-token] Using token: vtugjh.yrdml71jezyixk01
	I0817 22:30:22.311499  255491 out.go:204]   - Configuring RBAC rules ...
	I0817 22:30:22.311610  255491 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:30:22.311706  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:30:22.311887  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:30:22.312069  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:30:22.312240  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:30:22.312338  255491 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:30:22.312462  255491 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:30:22.312516  255491 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:30:22.312583  255491 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:30:22.312595  255491 kubeadm.go:322] 
	I0817 22:30:22.312680  255491 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:30:22.312693  255491 kubeadm.go:322] 
	I0817 22:30:22.312798  255491 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:30:22.312806  255491 kubeadm.go:322] 
	I0817 22:30:22.312847  255491 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:30:22.312926  255491 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:30:22.313008  255491 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:30:22.313016  255491 kubeadm.go:322] 
	I0817 22:30:22.313073  255491 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:30:22.313092  255491 kubeadm.go:322] 
	I0817 22:30:22.313135  255491 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:30:22.313141  255491 kubeadm.go:322] 
	I0817 22:30:22.313180  255491 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:30:22.313271  255491 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:30:22.313397  255491 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:30:22.313421  255491 kubeadm.go:322] 
	I0817 22:30:22.313561  255491 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:30:22.313670  255491 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:30:22.313691  255491 kubeadm.go:322] 
	I0817 22:30:22.313790  255491 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.313910  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:30:22.313930  255491 kubeadm.go:322] 	--control-plane 
	I0817 22:30:22.313933  255491 kubeadm.go:322] 
	I0817 22:30:22.314017  255491 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:30:22.314029  255491 kubeadm.go:322] 
	I0817 22:30:22.314161  255491 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.314324  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
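The `--discovery-token-ca-cert-hash sha256:...` value printed in the join commands above is, per kubeadm's documented behaviour, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal sketch of that derivation follows; the certificate path is an assumption based on the certificateDir shown earlier in this log.

```go
// ca_hash_sketch.go - illustrative sketch of how a kubeadm discovery token
// CA certificate hash is derived: SHA-256 over the DER-encoded Subject
// Public Key Info of the cluster CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed path for the sketch, based on the certificateDir in the log above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
```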
	I0817 22:30:22.314342  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:30:22.314352  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:30:22.316092  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:30:22.317823  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:30:22.330216  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:30:22.364427  255491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:30:22.364530  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.364541  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=default-k8s-diff-port-321287 minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.398800  255491 ops.go:34] apiserver oom_adj: -16
	I0817 22:30:22.789239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.908906  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.507279  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.007071  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.507204  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.007980  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.507764  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.007834  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.507449  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.518185  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:26.017066  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:27.007162  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:27.507978  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.008024  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.507376  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.007583  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.507355  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.007416  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.507014  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.007539  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.507116  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.516778  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:31.016979  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:32.007363  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:32.508019  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.007624  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.507337  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.007239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.507255  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.007804  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.507323  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.647403  255491 kubeadm.go:1081] duration metric: took 13.282950211s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:35.647439  255491 kubeadm.go:406] StartCluster complete in 5m30.275148595s
	I0817 22:30:35.647465  255491 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.647562  255491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:35.649294  255491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.649625  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:35.649672  255491 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:35.649793  255491 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649815  255491 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.649827  255491 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:35.649857  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:35.649897  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.649914  255491 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649931  255491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-321287"
	I0817 22:30:35.650130  255491 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.650154  255491 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.650163  255491 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:35.650207  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.650360  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650362  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650397  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650456  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650616  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650660  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.666863  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0817 22:30:35.666883  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0817 22:30:35.667444  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.667657  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.668085  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668105  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668245  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668256  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668780  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.669523  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.669553  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.670006  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:30:35.670382  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.670448  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.670513  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.670985  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.671005  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.671824  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.672870  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.672905  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.682146  255491 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.682167  255491 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:35.682200  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.682640  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.682674  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.690436  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0817 22:30:35.691039  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.691642  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.691666  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.692056  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.692328  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.692416  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0817 22:30:35.693048  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.693566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.693588  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.693974  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.694180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.694314  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.696623  255491 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:35.696015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.698535  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:35.698555  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:35.698593  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.700284  255491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:35.702071  255491 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.702097  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:35.702127  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.703050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.703111  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.703161  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703297  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.703498  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.703605  255491 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-321287" context rescaled to 1 replicas
	I0817 22:30:35.703641  255491 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:35.706989  255491 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:35.703707  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.707227  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.707832  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40363
	I0817 22:30:35.708116  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.709223  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:35.709358  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.709408  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.709426  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.709650  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.709767  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.709979  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.710587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.710608  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.711008  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.711578  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.711631  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.730317  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35051
	I0817 22:30:35.730875  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.731566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.731595  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.731993  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.732228  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.734475  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.734778  255491 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.734799  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:35.734822  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.737878  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.738359  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738478  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.739396  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.739599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.739850  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.902960  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.913205  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.936947  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:35.936977  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:35.977717  255491 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.977876  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:35.984231  255491 node_ready.go:49] node "default-k8s-diff-port-321287" has status "Ready":"True"
	I0817 22:30:35.984286  255491 node_ready.go:38] duration metric: took 6.524258ms waiting for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.984302  255491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:36.008884  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:36.008915  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:36.010024  255491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.073572  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.073607  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:36.139665  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.382827  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.382863  255491 pod_ready.go:81] duration metric: took 372.809939ms waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.382878  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513607  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.513640  255491 pod_ready.go:81] duration metric: took 130.752675ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513653  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610942  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.610974  255491 pod_ready.go:81] duration metric: took 97.312774ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610989  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:33.017198  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:34.709633  254975 pod_ready.go:81] duration metric: took 4m0.001081095s waiting for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	E0817 22:30:34.709679  254975 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:30:34.709709  254975 pod_ready.go:38] duration metric: took 4m1.187941338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:34.709762  254975 kubeadm.go:640] restartCluster took 5m3.210628062s
	W0817 22:30:34.709854  254975 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:30:34.709895  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:30:38.629738  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.716488882s)
	I0817 22:30:38.629799  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651889874s)
	I0817 22:30:38.629829  255491 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:38.629802  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629871  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.629753  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.726738359s)
	I0817 22:30:38.629944  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629971  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630368  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630389  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630401  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630429  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630528  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630559  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630578  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630587  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630677  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.630707  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630732  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630973  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630991  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.631004  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.631007  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.631015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.632993  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.633019  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.633033  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.758987  255491 pod_ready.go:102] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:39.084274  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.944554423s)
	I0817 22:30:39.084336  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.084785  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.084799  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:39.084817  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.084829  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084842  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.085152  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.085168  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.085179  255491 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-321287"
	I0817 22:30:39.087296  255491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:30:39.089202  255491 addons.go:502] enable addons completed in 3.439530445s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:30:41.238328  255491 pod_ready.go:92] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.238358  255491 pod_ready.go:81] duration metric: took 4.627360634s waiting for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.238376  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.244985  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.245011  255491 pod_ready.go:81] duration metric: took 6.626883ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.245022  255491 pod_ready.go:38] duration metric: took 5.260700173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:41.245042  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:41.245097  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:41.262899  255491 api_server.go:72] duration metric: took 5.559222986s to wait for apiserver process to appear ...
	I0817 22:30:41.262935  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:41.262957  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:30:41.268642  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:30:41.269921  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:41.269947  255491 api_server.go:131] duration metric: took 7.005146ms to wait for apiserver health ...
	I0817 22:30:41.269955  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:41.276807  255491 system_pods.go:59] 9 kube-system pods found
	I0817 22:30:41.276844  255491 system_pods.go:61] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.276855  255491 system_pods.go:61] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.276863  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.276868  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.276875  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.276883  255491 system_pods.go:61] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.276890  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.276908  255491 system_pods.go:61] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.276918  255491 system_pods.go:61] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.276929  255491 system_pods.go:74] duration metric: took 6.967523ms to wait for pod list to return data ...
	I0817 22:30:41.276941  255491 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:41.279696  255491 default_sa.go:45] found service account: "default"
	I0817 22:30:41.279724  255491 default_sa.go:55] duration metric: took 2.773544ms for default service account to be created ...
	I0817 22:30:41.279735  255491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:41.286220  255491 system_pods.go:86] 9 kube-system pods found
	I0817 22:30:41.286258  255491 system_pods.go:89] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.286269  255491 system_pods.go:89] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.286277  255491 system_pods.go:89] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.286283  255491 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.286287  255491 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.286292  255491 system_pods.go:89] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.286296  255491 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.286302  255491 system_pods.go:89] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.286306  255491 system_pods.go:89] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.286316  255491 system_pods.go:126] duration metric: took 6.576272ms to wait for k8s-apps to be running ...
	I0817 22:30:41.286326  255491 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:41.286373  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:41.301841  255491 system_svc.go:56] duration metric: took 15.499888ms WaitForService to wait for kubelet.
	I0817 22:30:41.301874  255491 kubeadm.go:581] duration metric: took 5.598205886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:41.301898  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:41.306253  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:41.306289  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:41.306300  255491 node_conditions.go:105] duration metric: took 4.396496ms to run NodePressure ...
	I0817 22:30:41.306311  255491 start.go:228] waiting for startup goroutines ...
	I0817 22:30:41.306320  255491 start.go:233] waiting for cluster config update ...
	I0817 22:30:41.306329  255491 start.go:242] writing updated cluster config ...
	I0817 22:30:41.306617  255491 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:41.363947  255491 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:41.366167  255491 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-321287" cluster and "default" namespace by default
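The recurring pod_ready.go lines in this log boil down to polling a pod until its `Ready` condition reports `True` or a timeout passes. A minimal client-go loop of that shape is sketched below; it is illustrative only, and the kubeconfig path, namespace, and pod name are placeholders rather than values taken from this report.

```go
// pod_ready_sketch.go - illustrative only; a minimal client-go loop that
// waits for a pod's Ready condition to become True, mirroring the kind of
// wait the pod_ready.go log lines above record.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Placeholder namespace and pod name.
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "example-pod", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```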
	I0817 22:30:47.861835  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.151914062s)
	I0817 22:30:47.861926  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:47.877704  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:47.888385  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:47.898212  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:47.898269  254975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0817 22:30:47.957871  254975 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0817 22:30:47.958020  254975 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:48.121563  254975 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:48.121724  254975 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:48.121869  254975 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:48.316212  254975 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:48.316379  254975 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:48.324040  254975 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0817 22:30:48.453946  254975 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:48.456278  254975 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:48.456403  254975 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:48.456486  254975 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:48.456629  254975 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:48.456723  254975 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:48.456831  254975 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:48.456916  254975 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:48.456992  254975 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:48.457084  254975 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:48.457233  254975 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:48.457347  254975 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:48.457400  254975 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:48.457478  254975 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:48.599977  254975 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:48.760474  254975 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:48.873066  254975 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:48.958450  254975 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:48.959335  254975 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:48.961565  254975 out.go:204]   - Booting up control plane ...
	I0817 22:30:48.961672  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:48.972854  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:48.974149  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:48.975110  254975 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:48.981334  254975 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:58.986028  254975 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004044 seconds
	I0817 22:30:58.986232  254975 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:59.005484  254975 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:59.530563  254975 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:59.530730  254975 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-294781 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 22:31:00.039739  254975 kubeadm.go:322] [bootstrap-token] Using token: y5v57w.cds9r5wk990e6rgq
	I0817 22:31:00.041700  254975 out.go:204]   - Configuring RBAC rules ...
	I0817 22:31:00.041831  254975 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:31:00.051302  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:31:00.056478  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:31:00.060403  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:31:00.065454  254975 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:31:00.155583  254975 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:31:00.472429  254975 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:31:00.474442  254975 kubeadm.go:322] 
	I0817 22:31:00.474512  254975 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:31:00.474554  254975 kubeadm.go:322] 
	I0817 22:31:00.474671  254975 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:31:00.474686  254975 kubeadm.go:322] 
	I0817 22:31:00.474708  254975 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:31:00.474808  254975 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:31:00.474883  254975 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:31:00.474895  254975 kubeadm.go:322] 
	I0817 22:31:00.474973  254975 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:31:00.475082  254975 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:31:00.475179  254975 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:31:00.475193  254975 kubeadm.go:322] 
	I0817 22:31:00.475308  254975 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0817 22:31:00.475421  254975 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:31:00.475431  254975 kubeadm.go:322] 
	I0817 22:31:00.475551  254975 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.475696  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:31:00.475750  254975 kubeadm.go:322]     --control-plane 	  
	I0817 22:31:00.475759  254975 kubeadm.go:322] 
	I0817 22:31:00.475881  254975 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:31:00.475937  254975 kubeadm.go:322] 
	I0817 22:31:00.476044  254975 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.476196  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:31:00.476725  254975 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:31:00.476766  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:31:00.476782  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:31:00.478932  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:31:00.480754  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:31:00.496449  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:31:00.527578  254975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:31:00.527658  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.527769  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=old-k8s-version-294781 minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.809784  254975 ops.go:34] apiserver oom_adj: -16
	I0817 22:31:00.809925  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.991957  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:01.627311  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.126890  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.626673  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.127657  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.627284  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.127320  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.627026  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.127336  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.626721  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.127279  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.626697  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.127307  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.626920  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.127266  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.626970  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.126923  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.626808  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.127298  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.627182  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.126639  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.626681  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.127321  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.626904  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.127274  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.627272  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.127457  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.627280  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.127333  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.231130  254975 kubeadm.go:1081] duration metric: took 14.703542822s to wait for elevateKubeSystemPrivileges.
	I0817 22:31:15.231183  254975 kubeadm.go:406] StartCluster complete in 5m43.780243338s
	I0817 22:31:15.231254  254975 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.231391  254975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:31:15.233245  254975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.233533  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:31:15.233848  254975 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:31:15.233927  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:31:15.233947  254975 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-294781"
	I0817 22:31:15.233968  254975 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-294781"
	W0817 22:31:15.233977  254975 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:31:15.233983  254975 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234001  254975 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234007  254975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-294781"
	I0817 22:31:15.234021  254975 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-294781"
	W0817 22:31:15.234040  254975 addons.go:240] addon metrics-server should already be in state true
	I0817 22:31:15.234075  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234097  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234576  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234581  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234650  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.252847  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0817 22:31:15.252891  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0817 22:31:15.253743  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.253833  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.254616  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254632  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.254713  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0817 22:31:15.254887  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254906  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.255216  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255276  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.255294  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255865  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255872  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255960  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.255977  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.256400  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.256604  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.269860  254975 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-294781"
	W0817 22:31:15.269883  254975 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:31:15.269911  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.270304  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.270335  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.273014  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0817 22:31:15.273532  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.274114  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.274134  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.274549  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.274769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.276415  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.276491  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0817 22:31:15.278935  254975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:31:15.277380  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.278041  254975 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-294781" context rescaled to 1 replicas
	I0817 22:31:15.280642  254975 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:31:15.282441  254975 out.go:177] * Verifying Kubernetes components...
	I0817 22:31:15.280856  254975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.281832  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.284263  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.284347  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:31:15.284348  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:31:15.284366  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.285256  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.285580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.288289  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.288456  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.290643  254975 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:31:15.289601  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.289769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.292678  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:31:15.292693  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:31:15.292721  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.292776  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.293060  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.293277  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.293791  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.297193  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0817 22:31:15.297816  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.298486  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.298506  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.298962  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.299508  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.299531  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.300275  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.300994  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.301024  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.301098  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.301296  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.301502  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.301651  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.321283  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0817 22:31:15.321876  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.322943  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.322971  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.323496  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.323842  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.326563  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.326910  254975 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.326933  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:31:15.326957  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.330190  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.330947  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.330978  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.331193  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.331422  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.331552  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.331681  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.497277  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.529500  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.531359  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:31:15.531381  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:31:15.585477  254975 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.585494  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:31:15.590969  254975 node_ready.go:49] node "old-k8s-version-294781" has status "Ready":"True"
	I0817 22:31:15.591001  254975 node_ready.go:38] duration metric: took 5.470452ms waiting for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.591012  254975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:15.594026  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:31:15.594077  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:31:15.596784  254975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:15.638420  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:15.638455  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:31:15.707735  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:16.690916  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.193582768s)
	I0817 22:31:16.690987  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691002  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691002  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161462189s)
	I0817 22:31:16.691042  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105375097s)
	I0817 22:31:16.691044  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691217  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691158  254975 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0817 22:31:16.691422  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691464  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691490  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691561  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691512  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691586  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691603  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691630  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691813  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691832  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692047  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692086  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692110  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.692130  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.692114  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.692460  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692480  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828440  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.120652237s)
	I0817 22:31:16.828511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828525  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.828913  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.828939  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828952  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828963  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.829228  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.829252  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.829264  254975 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-294781"
	I0817 22:31:16.829279  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.831430  254975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:31:16.834005  254975 addons.go:502] enable addons completed in 1.600151352s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:31:17.618673  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.110224  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.610989  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.611015  254975 pod_ready.go:81] duration metric: took 5.014205232s waiting for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.611025  254975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616618  254975 pod_ready.go:92] pod "kube-proxy-44jmp" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.616639  254975 pod_ready.go:81] duration metric: took 5.608097ms waiting for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616646  254975 pod_ready.go:38] duration metric: took 5.025620457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:20.616695  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:31:20.616748  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:31:20.633102  254975 api_server.go:72] duration metric: took 5.352419031s to wait for apiserver process to appear ...
	I0817 22:31:20.633131  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:31:20.633152  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:31:20.640585  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:31:20.641784  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:31:20.641807  254975 api_server.go:131] duration metric: took 8.66923ms to wait for apiserver health ...
	I0817 22:31:20.641815  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:31:20.647851  254975 system_pods.go:59] 4 kube-system pods found
	I0817 22:31:20.647904  254975 system_pods.go:61] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.647909  254975 system_pods.go:61] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.647917  254975 system_pods.go:61] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.647923  254975 system_pods.go:61] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.647929  254975 system_pods.go:74] duration metric: took 6.108947ms to wait for pod list to return data ...
	I0817 22:31:20.647937  254975 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:31:20.651451  254975 default_sa.go:45] found service account: "default"
	I0817 22:31:20.651485  254975 default_sa.go:55] duration metric: took 3.540013ms for default service account to be created ...
	I0817 22:31:20.651496  254975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:31:20.655529  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.655556  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.655561  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.655567  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.655575  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.655593  254975 retry.go:31] will retry after 194.203175ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:20.860033  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.860063  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.860069  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.860076  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.860082  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.860098  254975 retry.go:31] will retry after 273.217607ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.138457  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.138483  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.138488  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.138494  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.138501  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.138520  254975 retry.go:31] will retry after 311.999616ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.455473  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.455507  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.455513  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.455519  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.455526  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.455542  254975 retry.go:31] will retry after 462.378441ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.922656  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.922695  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.922703  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.922714  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.922724  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.922743  254975 retry.go:31] will retry after 595.850716ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:22.525024  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:22.525067  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:22.525076  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:22.525087  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:22.525100  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:22.525123  254975 retry.go:31] will retry after 916.880182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:23.446648  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:23.446678  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:23.446684  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:23.446691  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:23.446697  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:23.446717  254975 retry.go:31] will retry after 1.080769148s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:24.532239  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:24.532270  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:24.532277  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:24.532287  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:24.532296  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:24.532325  254975 retry.go:31] will retry after 1.261174641s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:25.798397  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:25.798430  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:25.798435  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:25.798442  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:25.798449  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:25.798465  254975 retry.go:31] will retry after 1.383083099s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:27.187782  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:27.187816  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:27.187821  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:27.187828  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:27.187834  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:27.187852  254975 retry.go:31] will retry after 1.954135672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:29.148294  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:29.148325  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:29.148330  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:29.148337  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:29.148344  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:29.148359  254975 retry.go:31] will retry after 2.632641562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:31.786946  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:31.786981  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:31.786988  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:31.786998  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:31.787010  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:31.787030  254975 retry.go:31] will retry after 3.626446493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:35.421023  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:35.421053  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:35.421059  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:35.421065  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:35.421072  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:35.421089  254975 retry.go:31] will retry after 2.800907689s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:38.228118  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:38.228155  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:38.228165  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:38.228177  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:38.228187  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:38.228204  254975 retry.go:31] will retry after 3.699626464s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:41.932868  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:41.932902  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:41.932908  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:41.932915  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:41.932922  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:41.932939  254975 retry.go:31] will retry after 6.965217948s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:48.913824  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:48.913866  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:48.913875  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:48.913899  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:48.913909  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:48.913931  254975 retry.go:31] will retry after 7.880328521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:56.800829  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:56.800868  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:56.800876  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:56.800887  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:56.800893  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:56.800915  254975 retry.go:31] will retry after 7.054585059s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:32:03.878268  254975 system_pods.go:86] 7 kube-system pods found
	I0817 22:32:03.878297  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:03.878304  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Pending
	I0817 22:32:03.878308  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Pending
	I0817 22:32:03.878311  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:03.878316  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:03.878324  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:03.878331  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:03.878351  254975 retry.go:31] will retry after 13.129481457s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0817 22:32:17.015570  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:17.015609  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:17.015619  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:17.015627  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:17.015634  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Pending
	I0817 22:32:17.015640  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:17.015647  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:17.015672  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:17.015682  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:17.015709  254975 retry.go:31] will retry after 15.332291563s: missing components: kube-controller-manager
	I0817 22:32:32.354549  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:32.354587  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:32.354596  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:32.354603  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:32.354613  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Running
	I0817 22:32:32.354619  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:32.354626  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:32.354637  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:32.354646  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:32.354657  254975 system_pods.go:126] duration metric: took 1m11.703154434s to wait for k8s-apps to be running ...
	I0817 22:32:32.354700  254975 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:32:32.354766  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:32:32.372492  254975 system_svc.go:56] duration metric: took 17.765249ms WaitForService to wait for kubelet.
	I0817 22:32:32.372541  254975 kubeadm.go:581] duration metric: took 1m17.091866023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:32:32.372573  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:32:32.377413  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:32:32.377442  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:32:32.377455  254975 node_conditions.go:105] duration metric: took 4.875282ms to run NodePressure ...
	I0817 22:32:32.377467  254975 start.go:228] waiting for startup goroutines ...
	I0817 22:32:32.377473  254975 start.go:233] waiting for cluster config update ...
	I0817 22:32:32.377483  254975 start.go:242] writing updated cluster config ...
	I0817 22:32:32.377828  254975 ssh_runner.go:195] Run: rm -f paused
	I0817 22:32:32.433865  254975 start.go:600] kubectl: 1.28.0, cluster: 1.16.0 (minor skew: 12)
	I0817 22:32:32.436131  254975 out.go:177] 
	W0817 22:32:32.437621  254975 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0817 22:32:32.439072  254975 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0817 22:32:32.440794  254975 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-294781" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:25:10 UTC, ends at Thu 2023-08-17 22:41:34 UTC. --
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.057982206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd15ba73-2d1d-4f27-b026-c4570583935d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.058149660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd15ba73-2d1d-4f27-b026-c4570583935d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.068118271Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=925b5cbd-8533-4634-a8c7-93ade01f1768 name=/runtime.v1alpha2.RuntimeService/Status
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.068247291Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=925b5cbd-8533-4634-a8c7-93ade01f1768 name=/runtime.v1alpha2.RuntimeService/Status
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.096905354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1952afda-c75a-464f-92af-cb9aa4fd6f24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.096998450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1952afda-c75a-464f-92af-cb9aa4fd6f24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.097186156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1952afda-c75a-464f-92af-cb9aa4fd6f24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.133889077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e1b2f46-be10-4704-bf94-4c5815b96e90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.133979319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e1b2f46-be10-4704-bf94-4c5815b96e90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.134158064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e1b2f46-be10-4704-bf94-4c5815b96e90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.173063306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8ba244af-0a3f-43d2-a164-82b1dca70564 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.173152184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8ba244af-0a3f-43d2-a164-82b1dca70564 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.173317856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8ba244af-0a3f-43d2-a164-82b1dca70564 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.210201207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b52b7842-340f-4ce5-90ee-30a57c98ca8b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.210296016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b52b7842-340f-4ce5-90ee-30a57c98ca8b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.210470209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b52b7842-340f-4ce5-90ee-30a57c98ca8b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.246318509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7822ffb7-938e-435a-9133-3d738f670b73 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.246381808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7822ffb7-938e-435a-9133-3d738f670b73 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.246620362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7822ffb7-938e-435a-9133-3d738f670b73 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.281945653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=efd156fa-d3d8-4ef3-aebb-8c2e0034e92f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.282023670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=efd156fa-d3d8-4ef3-aebb-8c2e0034e92f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.282192052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=efd156fa-d3d8-4ef3-aebb-8c2e0034e92f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.314941337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c951870-7df6-493c-842c-61a8c2f6d859 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.315010252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c951870-7df6-493c-842c-61a8c2f6d859 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:41:34 old-k8s-version-294781 crio[711]: time="2023-08-17 22:41:34.315328434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c951870-7df6-493c-842c-61a8c2f6d859 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	c3a070079a5db       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   edbf0319fa5c6
	1028581d1dbc5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   dd0043023ef65
	7b1fa03e7d897       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   d10bb4533f5b3
	72077201639f7       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   815dd72984ffc
	69cb530e82258       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   6156d77a5afd4
	c790de9f398ee       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   13f126d501c3f
	08d224b61e1f0       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   b280926ea75f5
	
	* 
	* ==> coredns [c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38] <==
	* .:53
	2023-08-17T22:31:18.297Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-08-17T22:31:18.297Z [INFO] CoreDNS-1.6.2
	2023-08-17T22:31:18.297Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-08-17T22:31:18.312Z [INFO] 127.0.0.1:51858 - 56078 "HINFO IN 1054767967733793581.3854523874822987122. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014180577s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-294781
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-294781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=old-k8s-version-294781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:30:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.56
	  Hostname:    old-k8s-version-294781
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 a33cc6505fd84c9f9ec3652fc9a21038
	 System UUID:                a33cc650-5fd8-4c9f-9ec3-652fc9a21038
	 Boot ID:                    1570635a-ff79-481b-860b-640904c2786a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-b9p7t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-294781                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                kube-apiserver-old-k8s-version-294781             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                kube-controller-manager-old-k8s-version-294781    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                kube-proxy-44jmp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-294781             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                metrics-server-74d5856cc6-4nqrx                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-294781     Node old-k8s-version-294781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-294781     Node old-k8s-version-294781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-294781     Node old-k8s-version-294781 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-294781  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug17 22:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.101490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.088903] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.579793] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154093] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.610403] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.094998] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.131091] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.139481] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.109287] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.248419] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +20.247432] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +0.470018] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.997654] kauditd_printk_skb: 3 callbacks suppressed
	[Aug17 22:26] kauditd_printk_skb: 2 callbacks suppressed
	[Aug17 22:30] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.618009] systemd-fstab-generator[3220]: Ignoring "noauto" for root device
	[Aug17 22:31] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6] <==
	* 2023-08-17 22:30:51.364126 I | raft: 3a1fc7f0094834a7 became follower at term 0
	2023-08-17 22:30:51.364158 I | raft: newRaft 3a1fc7f0094834a7 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-08-17 22:30:51.364174 I | raft: 3a1fc7f0094834a7 became follower at term 1
	2023-08-17 22:30:51.375653 W | auth: simple token is not cryptographically signed
	2023-08-17 22:30:51.382306 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-08-17 22:30:51.384378 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-17 22:30:51.384649 I | embed: listening for metrics on http://192.168.72.56:2381
	2023-08-17 22:30:51.384907 I | etcdserver: 3a1fc7f0094834a7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-17 22:30:51.385590 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-17 22:30:51.385987 I | etcdserver/membership: added member 3a1fc7f0094834a7 [https://192.168.72.56:2380] to cluster 2e67888e462b31f7
	2023-08-17 22:30:51.964723 I | raft: 3a1fc7f0094834a7 is starting a new election at term 1
	2023-08-17 22:30:51.964786 I | raft: 3a1fc7f0094834a7 became candidate at term 2
	2023-08-17 22:30:51.964806 I | raft: 3a1fc7f0094834a7 received MsgVoteResp from 3a1fc7f0094834a7 at term 2
	2023-08-17 22:30:51.964822 I | raft: 3a1fc7f0094834a7 became leader at term 2
	2023-08-17 22:30:51.964829 I | raft: raft.node: 3a1fc7f0094834a7 elected leader 3a1fc7f0094834a7 at term 2
	2023-08-17 22:30:51.965079 I | etcdserver: setting up the initial cluster version to 3.3
	2023-08-17 22:30:51.966741 I | etcdserver: published {Name:old-k8s-version-294781 ClientURLs:[https://192.168.72.56:2379]} to cluster 2e67888e462b31f7
	2023-08-17 22:30:51.966989 I | embed: ready to serve client requests
	2023-08-17 22:30:51.967282 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-08-17 22:30:51.967374 I | etcdserver/api: enabled capabilities for version 3.3
	2023-08-17 22:30:51.967426 I | embed: ready to serve client requests
	2023-08-17 22:30:51.968326 I | embed: serving client requests on 192.168.72.56:2379
	2023-08-17 22:30:51.968498 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-17 22:40:51.993720 I | mvcc: store.index: compact 665
	2023-08-17 22:40:51.995985 I | mvcc: finished scheduled compaction at 665 (took 1.802222ms)
	
	* 
	* ==> kernel <==
	*  22:41:34 up 16 min,  0 users,  load average: 0.00, 0.11, 0.16
	Linux old-k8s-version-294781 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d] <==
	* I0817 22:34:18.486382       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:34:18.486800       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:34:18.486881       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:34:18.486907       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:35:56.376209       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:35:56.376364       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:35:56.376451       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:35:56.376464       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:36:56.376875       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:36:56.377205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:36:56.377351       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:36:56.377399       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:38:56.378043       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:38:56.378156       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:38:56.378218       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:38:56.378226       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:40:56.381390       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:40:56.381563       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:40:56.381642       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:40:56.381668       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e] <==
	* E0817 22:35:17.193673       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:35:31.319469       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:35:47.446412       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:36:03.321802       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:36:17.698386       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:36:35.324143       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:36:47.951022       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:37:07.326845       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:37:18.203352       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:37:39.329113       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:37:48.455946       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:38:11.331190       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:38:18.708384       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:38:43.333876       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:38:48.961574       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:39:15.336389       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:39:19.213800       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:39:47.338956       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:39:49.465914       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:40:19.341408       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:40:19.718001       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0817 22:40:49.970113       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:40:51.343566       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:41:20.222656       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:41:23.346289       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb] <==
	* W0817 22:31:17.746266       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0817 22:31:17.773304       1 node.go:135] Successfully retrieved node IP: 192.168.72.56
	I0817 22:31:17.773360       1 server_others.go:149] Using iptables Proxier.
	I0817 22:31:17.775856       1 server.go:529] Version: v1.16.0
	I0817 22:31:17.783620       1 config.go:313] Starting service config controller
	I0817 22:31:17.783680       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0817 22:31:17.783809       1 config.go:131] Starting endpoints config controller
	I0817 22:31:17.783853       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0817 22:31:17.887713       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0817 22:31:17.888190       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731] <==
	* I0817 22:30:55.386607       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0817 22:30:55.386948       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0817 22:30:55.433097       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:30:55.433432       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 22:30:55.440045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:55.440283       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:30:55.452906       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:30:55.453147       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:30:55.453376       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:30:55.453629       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:30:55.456944       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:30:55.457027       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 22:30:55.457279       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:56.434686       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:30:56.449732       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 22:30:56.451601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:56.453272       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:30:56.455481       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:30:56.458659       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:30:56.460191       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:30:56.461432       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:30:56.462470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:30:56.463892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 22:30:56.474979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:31:14.898857       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:25:10 UTC, ends at Thu 2023-08-17 22:41:34 UTC. --
	Aug 17 22:37:00 old-k8s-version-294781 kubelet[3226]: E0817 22:37:00.857765    3226 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 17 22:37:00 old-k8s-version-294781 kubelet[3226]: E0817 22:37:00.857868    3226 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 17 22:37:00 old-k8s-version-294781 kubelet[3226]: E0817 22:37:00.857929    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Aug 17 22:37:12 old-k8s-version-294781 kubelet[3226]: E0817 22:37:12.835723    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:37:27 old-k8s-version-294781 kubelet[3226]: E0817 22:37:27.835835    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:37:38 old-k8s-version-294781 kubelet[3226]: E0817 22:37:38.835724    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:37:51 old-k8s-version-294781 kubelet[3226]: E0817 22:37:51.835438    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:38:05 old-k8s-version-294781 kubelet[3226]: E0817 22:38:05.835618    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:38:18 old-k8s-version-294781 kubelet[3226]: E0817 22:38:18.835370    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:38:31 old-k8s-version-294781 kubelet[3226]: E0817 22:38:31.835117    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:38:43 old-k8s-version-294781 kubelet[3226]: E0817 22:38:43.835966    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:38:58 old-k8s-version-294781 kubelet[3226]: E0817 22:38:58.835654    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:11 old-k8s-version-294781 kubelet[3226]: E0817 22:39:11.835882    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:24 old-k8s-version-294781 kubelet[3226]: E0817 22:39:24.835726    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:35 old-k8s-version-294781 kubelet[3226]: E0817 22:39:35.835310    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:47 old-k8s-version-294781 kubelet[3226]: E0817 22:39:47.835435    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:00 old-k8s-version-294781 kubelet[3226]: E0817 22:40:00.835842    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:14 old-k8s-version-294781 kubelet[3226]: E0817 22:40:14.835414    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:28 old-k8s-version-294781 kubelet[3226]: E0817 22:40:28.835710    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:41 old-k8s-version-294781 kubelet[3226]: E0817 22:40:41.835296    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:48 old-k8s-version-294781 kubelet[3226]: E0817 22:40:48.908728    3226 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Aug 17 22:40:52 old-k8s-version-294781 kubelet[3226]: E0817 22:40:52.835957    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:03 old-k8s-version-294781 kubelet[3226]: E0817 22:41:03.835404    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:17 old-k8s-version-294781 kubelet[3226]: E0817 22:41:17.839246    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:31 old-k8s-version-294781 kubelet[3226]: E0817 22:41:31.835355    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263] <==
	* I0817 22:31:18.313355       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:31:18.326286       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:31:18.326401       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:31:18.354240       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:31:18.355403       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-294781_65043a18-cf2f-4327-a44f-39d4d4062b92!
	I0817 22:31:18.356445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ad1d782-fad3-4aeb-a59b-781a98197afa", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-294781_65043a18-cf2f-4327-a44f-39d4d4062b92 became leader
	I0817 22:31:18.456221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-294781_65043a18-cf2f-4327-a44f-39d4d4062b92!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-294781 -n old-k8s-version-294781
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-294781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-4nqrx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-294781 describe pod metrics-server-74d5856cc6-4nqrx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-294781 describe pod metrics-server-74d5856cc6-4nqrx: exit status 1 (76.522743ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-4nqrx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-294781 describe pod metrics-server-74d5856cc6-4nqrx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (332.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-525875 -n no-preload-525875
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:44:15.283944069 +0000 UTC m=+5628.924711790
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-525875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-525875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.613µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-525875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-525875 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-525875 logs -n 25: (1.243169801s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo find                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo crio                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-975779                                       | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:43 UTC | 17 Aug 23 22:43 UTC |
	| start   | -p newest-cni-249978 --memory=2200 --alsologtostderr   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:43 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:43:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:43:29.225212  260271 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:43:29.225352  260271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:43:29.225363  260271 out.go:309] Setting ErrFile to fd 2...
	I0817 22:43:29.225367  260271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:43:29.225586  260271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:43:29.226371  260271 out.go:303] Setting JSON to false
	I0817 22:43:29.227414  260271 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26734,"bootTime":1692285475,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:43:29.227478  260271 start.go:138] virtualization: kvm guest
	I0817 22:43:29.230534  260271 out.go:177] * [newest-cni-249978] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:43:29.232697  260271 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:43:29.232714  260271 notify.go:220] Checking for updates...
	I0817 22:43:29.234644  260271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:43:29.236857  260271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:43:29.238439  260271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:43:29.240299  260271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:43:29.242356  260271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:43:29.244205  260271 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:43:29.244304  260271 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:43:29.244408  260271 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:43:29.244535  260271 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:43:29.282868  260271 out.go:177] * Using the kvm2 driver based on user configuration
	I0817 22:43:29.284863  260271 start.go:298] selected driver: kvm2
	I0817 22:43:29.284897  260271 start.go:902] validating driver "kvm2" against <nil>
	I0817 22:43:29.284910  260271 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:43:29.285624  260271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:43:29.285728  260271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:43:29.303995  260271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:43:29.304053  260271 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0817 22:43:29.304082  260271 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0817 22:43:29.304400  260271 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 22:43:29.304454  260271 cni.go:84] Creating CNI manager for ""
	I0817 22:43:29.304473  260271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:43:29.304493  260271 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0817 22:43:29.304511  260271 start_flags.go:319] config:
	{Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:43:29.304756  260271 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:43:29.307553  260271 out.go:177] * Starting control plane node newest-cni-249978 in cluster newest-cni-249978
	I0817 22:43:29.309282  260271 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:43:29.309347  260271 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 22:43:29.309359  260271 cache.go:57] Caching tarball of preloaded images
	I0817 22:43:29.309467  260271 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:43:29.309478  260271 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0817 22:43:29.309601  260271 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json ...
	I0817 22:43:29.309623  260271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json: {Name:mk256338f41ad1af00ef21e77d725daeb24732fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:43:29.309773  260271 start.go:365] acquiring machines lock for newest-cni-249978: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:43:29.309802  260271 start.go:369] acquired machines lock for "newest-cni-249978" in 15.914µs
	I0817 22:43:29.309819  260271 start.go:93] Provisioning new machine with config: &{Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:43:29.309888  260271 start.go:125] createHost starting for "" (driver="kvm2")
	I0817 22:43:29.312169  260271 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0817 22:43:29.312379  260271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:43:29.312432  260271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:43:29.328398  260271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0817 22:43:29.328919  260271 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:43:29.329591  260271 main.go:141] libmachine: Using API Version  1
	I0817 22:43:29.329618  260271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:43:29.329944  260271 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:43:29.330131  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:43:29.330344  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:29.330560  260271 start.go:159] libmachine.API.Create for "newest-cni-249978" (driver="kvm2")
	I0817 22:43:29.330586  260271 client.go:168] LocalClient.Create starting
	I0817 22:43:29.330622  260271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem
	I0817 22:43:29.330685  260271 main.go:141] libmachine: Decoding PEM data...
	I0817 22:43:29.330701  260271 main.go:141] libmachine: Parsing certificate...
	I0817 22:43:29.330757  260271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem
	I0817 22:43:29.330783  260271 main.go:141] libmachine: Decoding PEM data...
	I0817 22:43:29.330795  260271 main.go:141] libmachine: Parsing certificate...
	I0817 22:43:29.330812  260271 main.go:141] libmachine: Running pre-create checks...
	I0817 22:43:29.330821  260271 main.go:141] libmachine: (newest-cni-249978) Calling .PreCreateCheck
	I0817 22:43:29.331129  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetConfigRaw
	I0817 22:43:29.331677  260271 main.go:141] libmachine: Creating machine...
	I0817 22:43:29.331693  260271 main.go:141] libmachine: (newest-cni-249978) Calling .Create
	I0817 22:43:29.331864  260271 main.go:141] libmachine: (newest-cni-249978) Creating KVM machine...
	I0817 22:43:29.333535  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found existing default KVM network
	I0817 22:43:29.334959  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.334789  260294 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:a7:28} reservation:<nil>}
	I0817 22:43:29.336127  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.336032  260294 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:37:28:74} reservation:<nil>}
	I0817 22:43:29.336881  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.336811  260294 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:06:41} reservation:<nil>}
	I0817 22:43:29.338177  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.338091  260294 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bb8c0}
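
	[Editor's note] The three "skipping subnet" lines and the final "using free private subnet" line above show minikube searching for an unused /24 for the new libvirt network: a candidate is rejected whenever a host interface (virbr1, virbr2, virbr3) already sits inside it. A minimal, self-contained Go sketch of that idea follows; firstFreeSubnet and the +11 step are inferred from the 192.168.39 -> 50 -> 61 -> 72 progression in the log and are illustrative, not minikube's actual API.

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    // taken reports whether any host interface address falls inside the candidate subnet.
	    func taken(subnet *net.IPNet) (bool, error) {
	    	addrs, err := net.InterfaceAddrs()
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, a := range addrs {
	    		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
	    			return true, nil
	    		}
	    	}
	    	return false, nil
	    }

	    // firstFreeSubnet walks 192.168.39.0/24, 192.168.50.0/24, ... in steps of 11,
	    // mirroring the progression visible in the log, and returns the first unused one.
	    func firstFreeSubnet() (string, error) {
	    	for third := 39; third < 255; third += 11 {
	    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
	    		_, subnet, err := net.ParseCIDR(cidr)
	    		if err != nil {
	    			return "", err
	    		}
	    		used, err := taken(subnet)
	    		if err != nil {
	    			return "", err
	    		}
	    		if !used {
	    			return cidr, nil
	    		}
	    	}
	    	return "", fmt.Errorf("no free /24 found")
	    }

	    func main() {
	    	cidr, err := firstFreeSubnet()
	    	if err != nil {
	    		fmt.Println("error:", err)
	    		return
	    	}
	    	fmt.Println("using free private subnet", cidr)
	    }
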
	I0817 22:43:29.344476  260271 main.go:141] libmachine: (newest-cni-249978) DBG | trying to create private KVM network mk-newest-cni-249978 192.168.72.0/24...
	I0817 22:43:29.431818  260271 main.go:141] libmachine: (newest-cni-249978) DBG | private KVM network mk-newest-cni-249978 192.168.72.0/24 created
	I0817 22:43:29.431869  260271 main.go:141] libmachine: (newest-cni-249978) Setting up store path in /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978 ...
	I0817 22:43:29.431882  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.431815  260294 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:43:29.431897  260271 main.go:141] libmachine: (newest-cni-249978) Building disk image from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0817 22:43:29.431996  260271 main.go:141] libmachine: (newest-cni-249978) Downloading /home/jenkins/minikube-integration/16865-203458/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0817 22:43:29.669187  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.669030  260294 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa...
	I0817 22:43:29.868082  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.867919  260294 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/newest-cni-249978.rawdisk...
	I0817 22:43:29.868126  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Writing magic tar header
	I0817 22:43:29.868145  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Writing SSH key tar header
	I0817 22:43:29.868164  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:29.868091  260294 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978 ...
	I0817 22:43:29.868284  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978
	I0817 22:43:29.868305  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube/machines
	I0817 22:43:29.868315  260271 main.go:141] libmachine: (newest-cni-249978) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978 (perms=drwx------)
	I0817 22:43:29.868325  260271 main.go:141] libmachine: (newest-cni-249978) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube/machines (perms=drwxr-xr-x)
	I0817 22:43:29.868335  260271 main.go:141] libmachine: (newest-cni-249978) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458/.minikube (perms=drwxr-xr-x)
	I0817 22:43:29.868345  260271 main.go:141] libmachine: (newest-cni-249978) Setting executable bit set on /home/jenkins/minikube-integration/16865-203458 (perms=drwxrwxr-x)
	I0817 22:43:29.868372  260271 main.go:141] libmachine: (newest-cni-249978) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0817 22:43:29.868384  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:43:29.868393  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16865-203458
	I0817 22:43:29.868403  260271 main.go:141] libmachine: (newest-cni-249978) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0817 22:43:29.868409  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0817 22:43:29.868420  260271 main.go:141] libmachine: (newest-cni-249978) Creating domain...
	I0817 22:43:29.868427  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home/jenkins
	I0817 22:43:29.868436  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Checking permissions on dir: /home
	I0817 22:43:29.868442  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Skipping /home - not owner
	I0817 22:43:29.869648  260271 main.go:141] libmachine: (newest-cni-249978) define libvirt domain using xml: 
	I0817 22:43:29.869685  260271 main.go:141] libmachine: (newest-cni-249978) <domain type='kvm'>
	I0817 22:43:29.869698  260271 main.go:141] libmachine: (newest-cni-249978)   <name>newest-cni-249978</name>
	I0817 22:43:29.869706  260271 main.go:141] libmachine: (newest-cni-249978)   <memory unit='MiB'>2200</memory>
	I0817 22:43:29.869715  260271 main.go:141] libmachine: (newest-cni-249978)   <vcpu>2</vcpu>
	I0817 22:43:29.869723  260271 main.go:141] libmachine: (newest-cni-249978)   <features>
	I0817 22:43:29.869749  260271 main.go:141] libmachine: (newest-cni-249978)     <acpi/>
	I0817 22:43:29.869774  260271 main.go:141] libmachine: (newest-cni-249978)     <apic/>
	I0817 22:43:29.869786  260271 main.go:141] libmachine: (newest-cni-249978)     <pae/>
	I0817 22:43:29.869800  260271 main.go:141] libmachine: (newest-cni-249978)     
	I0817 22:43:29.869813  260271 main.go:141] libmachine: (newest-cni-249978)   </features>
	I0817 22:43:29.869826  260271 main.go:141] libmachine: (newest-cni-249978)   <cpu mode='host-passthrough'>
	I0817 22:43:29.869851  260271 main.go:141] libmachine: (newest-cni-249978)   
	I0817 22:43:29.869863  260271 main.go:141] libmachine: (newest-cni-249978)   </cpu>
	I0817 22:43:29.869892  260271 main.go:141] libmachine: (newest-cni-249978)   <os>
	I0817 22:43:29.869917  260271 main.go:141] libmachine: (newest-cni-249978)     <type>hvm</type>
	I0817 22:43:29.869942  260271 main.go:141] libmachine: (newest-cni-249978)     <boot dev='cdrom'/>
	I0817 22:43:29.869954  260271 main.go:141] libmachine: (newest-cni-249978)     <boot dev='hd'/>
	I0817 22:43:29.869967  260271 main.go:141] libmachine: (newest-cni-249978)     <bootmenu enable='no'/>
	I0817 22:43:29.869974  260271 main.go:141] libmachine: (newest-cni-249978)   </os>
	I0817 22:43:29.869981  260271 main.go:141] libmachine: (newest-cni-249978)   <devices>
	I0817 22:43:29.869990  260271 main.go:141] libmachine: (newest-cni-249978)     <disk type='file' device='cdrom'>
	I0817 22:43:29.870022  260271 main.go:141] libmachine: (newest-cni-249978)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/boot2docker.iso'/>
	I0817 22:43:29.870043  260271 main.go:141] libmachine: (newest-cni-249978)       <target dev='hdc' bus='scsi'/>
	I0817 22:43:29.870065  260271 main.go:141] libmachine: (newest-cni-249978)       <readonly/>
	I0817 22:43:29.870080  260271 main.go:141] libmachine: (newest-cni-249978)     </disk>
	I0817 22:43:29.870095  260271 main.go:141] libmachine: (newest-cni-249978)     <disk type='file' device='disk'>
	I0817 22:43:29.870115  260271 main.go:141] libmachine: (newest-cni-249978)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0817 22:43:29.870137  260271 main.go:141] libmachine: (newest-cni-249978)       <source file='/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/newest-cni-249978.rawdisk'/>
	I0817 22:43:29.870155  260271 main.go:141] libmachine: (newest-cni-249978)       <target dev='hda' bus='virtio'/>
	I0817 22:43:29.870170  260271 main.go:141] libmachine: (newest-cni-249978)     </disk>
	I0817 22:43:29.870182  260271 main.go:141] libmachine: (newest-cni-249978)     <interface type='network'>
	I0817 22:43:29.870194  260271 main.go:141] libmachine: (newest-cni-249978)       <source network='mk-newest-cni-249978'/>
	I0817 22:43:29.870208  260271 main.go:141] libmachine: (newest-cni-249978)       <model type='virtio'/>
	I0817 22:43:29.870234  260271 main.go:141] libmachine: (newest-cni-249978)     </interface>
	I0817 22:43:29.870261  260271 main.go:141] libmachine: (newest-cni-249978)     <interface type='network'>
	I0817 22:43:29.870275  260271 main.go:141] libmachine: (newest-cni-249978)       <source network='default'/>
	I0817 22:43:29.870288  260271 main.go:141] libmachine: (newest-cni-249978)       <model type='virtio'/>
	I0817 22:43:29.870300  260271 main.go:141] libmachine: (newest-cni-249978)     </interface>
	I0817 22:43:29.870310  260271 main.go:141] libmachine: (newest-cni-249978)     <serial type='pty'>
	I0817 22:43:29.870329  260271 main.go:141] libmachine: (newest-cni-249978)       <target port='0'/>
	I0817 22:43:29.870343  260271 main.go:141] libmachine: (newest-cni-249978)     </serial>
	I0817 22:43:29.870361  260271 main.go:141] libmachine: (newest-cni-249978)     <console type='pty'>
	I0817 22:43:29.870376  260271 main.go:141] libmachine: (newest-cni-249978)       <target type='serial' port='0'/>
	I0817 22:43:29.870391  260271 main.go:141] libmachine: (newest-cni-249978)     </console>
	I0817 22:43:29.870404  260271 main.go:141] libmachine: (newest-cni-249978)     <rng model='virtio'>
	I0817 22:43:29.870417  260271 main.go:141] libmachine: (newest-cni-249978)       <backend model='random'>/dev/random</backend>
	I0817 22:43:29.870430  260271 main.go:141] libmachine: (newest-cni-249978)     </rng>
	I0817 22:43:29.870438  260271 main.go:141] libmachine: (newest-cni-249978)     
	I0817 22:43:29.870450  260271 main.go:141] libmachine: (newest-cni-249978)     
	I0817 22:43:29.870465  260271 main.go:141] libmachine: (newest-cni-249978)   </devices>
	I0817 22:43:29.870477  260271 main.go:141] libmachine: (newest-cni-249978) </domain>
	I0817 22:43:29.870488  260271 main.go:141] libmachine: (newest-cni-249978) 
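
	[Editor's note] The XML block above is the libvirt domain definition the kvm2 driver assembles before asking libvirt to create the VM. As a rough sketch of how such a definition can be rendered from a template in Go, under the assumption of a heavily trimmed-down template: domainConfig and domainTmpl below are invented for illustration and carry far fewer fields than the driver's real template (no ISO, raw disk, serial console, or second interface).

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // domainConfig holds only the fields this sketch needs; the real driver
	    // carries many more (ISO path, raw disk path, extra interfaces, ...).
	    type domainConfig struct {
	    	Name     string
	    	MemoryMB int
	    	CPUs     int
	    	Network  string
	    }

	    const domainTmpl = `<domain type='kvm'>
	      <name>{{.Name}}</name>
	      <memory unit='MiB'>{{.MemoryMB}}</memory>
	      <vcpu>{{.CPUs}}</vcpu>
	      <os><type>hvm</type><boot dev='hd'/></os>
	      <devices>
	        <interface type='network'>
	          <source network='{{.Network}}'/>
	          <model type='virtio'/>
	        </interface>
	      </devices>
	    </domain>
	    `

	    func main() {
	    	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	    	cfg := domainConfig{Name: "newest-cni-249978", MemoryMB: 2200, CPUs: 2, Network: "mk-newest-cni-249978"}
	    	// The rendered XML would then be handed to libvirt to define the domain.
	    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
	    		panic(err)
	    	}
	    }
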
	I0817 22:43:29.874928  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:5f:3e:e5 in network default
	I0817 22:43:29.875566  260271 main.go:141] libmachine: (newest-cni-249978) Ensuring networks are active...
	I0817 22:43:29.875592  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:29.876353  260271 main.go:141] libmachine: (newest-cni-249978) Ensuring network default is active
	I0817 22:43:29.876655  260271 main.go:141] libmachine: (newest-cni-249978) Ensuring network mk-newest-cni-249978 is active
	I0817 22:43:29.877194  260271 main.go:141] libmachine: (newest-cni-249978) Getting domain xml...
	I0817 22:43:29.878009  260271 main.go:141] libmachine: (newest-cni-249978) Creating domain...
	I0817 22:43:31.212501  260271 main.go:141] libmachine: (newest-cni-249978) Waiting to get IP...
	I0817 22:43:31.213357  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:31.213745  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:31.213838  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:31.213751  260294 retry.go:31] will retry after 289.467889ms: waiting for machine to come up
	I0817 22:43:31.505312  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:31.505822  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:31.505857  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:31.505746  260294 retry.go:31] will retry after 333.465614ms: waiting for machine to come up
	I0817 22:43:31.841229  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:31.841736  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:31.841767  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:31.841686  260294 retry.go:31] will retry after 447.598263ms: waiting for machine to come up
	I0817 22:43:32.291539  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:32.292025  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:32.292049  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:32.291979  260294 retry.go:31] will retry after 421.026105ms: waiting for machine to come up
	I0817 22:43:32.714435  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:32.714954  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:32.714981  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:32.714898  260294 retry.go:31] will retry after 504.711849ms: waiting for machine to come up
	I0817 22:43:33.221811  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:33.222377  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:33.222409  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:33.222316  260294 retry.go:31] will retry after 757.678315ms: waiting for machine to come up
	I0817 22:43:33.981286  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:33.981768  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:33.981791  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:33.981711  260294 retry.go:31] will retry after 732.194593ms: waiting for machine to come up
	I0817 22:43:34.715370  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:34.715839  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:34.715871  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:34.715777  260294 retry.go:31] will retry after 1.279316788s: waiting for machine to come up
	I0817 22:43:35.997326  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:35.997763  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:35.997793  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:35.997686  260294 retry.go:31] will retry after 1.450530357s: waiting for machine to come up
	I0817 22:43:37.450395  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:37.450961  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:37.450990  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:37.450906  260294 retry.go:31] will retry after 2.023107358s: waiting for machine to come up
	I0817 22:43:39.475595  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:39.476154  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:39.476193  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:39.476085  260294 retry.go:31] will retry after 2.673588303s: waiting for machine to come up
	I0817 22:43:42.151879  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:42.152336  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:42.152368  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:42.152276  260294 retry.go:31] will retry after 2.476865949s: waiting for machine to come up
	I0817 22:43:44.631056  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:44.631508  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:44.631533  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:44.631472  260294 retry.go:31] will retry after 3.343220591s: waiting for machine to come up
	I0817 22:43:47.977264  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:47.977803  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:43:47.977828  260271 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:43:47.977763  260294 retry.go:31] will retry after 3.798484552s: waiting for machine to come up
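
	[Editor's note] Each "will retry after ..." line above is one iteration of a polling loop that waits for the new domain to obtain a DHCP lease, with a delay that grows (and is jittered) between attempts; here the address appeared after roughly 22 seconds. A minimal sketch of that wait pattern follows; lookupIP is a stand-in for the real lease query and the delay formula is only indicative, not the one used by retry.go.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    var errNoIP = errors.New("no IP yet")

	    // lookupIP is a stand-in for querying the libvirt network's DHCP leases.
	    func lookupIP() (string, error) {
	    	return "", errNoIP // pretend the lease has not appeared yet
	    }

	    // waitForIP polls lookupIP with a randomized, slowly growing delay,
	    // giving up after the deadline, similar to the retries in the log.
	    func waitForIP(deadline time.Duration) (string, error) {
	    	start := time.Now()
	    	for attempt := 1; time.Since(start) < deadline; attempt++ {
	    		ip, err := lookupIP()
	    		if err == nil {
	    			return ip, nil
	    		}
	    		delay := time.Duration(200+rand.Intn(300*attempt)) * time.Millisecond
	    		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, delay)
	    		time.Sleep(delay)
	    	}
	    	return "", fmt.Errorf("timed out waiting for IP after %v", deadline)
	    }

	    func main() {
	    	if _, err := waitForIP(3 * time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }
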
	I0817 22:43:51.777833  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:51.778484  260271 main.go:141] libmachine: (newest-cni-249978) Found IP for machine: 192.168.72.79
	I0817 22:43:51.778500  260271 main.go:141] libmachine: (newest-cni-249978) Reserving static IP address...
	I0817 22:43:51.778511  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has current primary IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:51.778835  260271 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find host DHCP lease matching {name: "newest-cni-249978", mac: "52:54:00:88:0c:ac", ip: "192.168.72.79"} in network mk-newest-cni-249978
	I0817 22:43:51.868847  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Getting to WaitForSSH function...
	I0817 22:43:51.868881  260271 main.go:141] libmachine: (newest-cni-249978) Reserved static IP address: 192.168.72.79
	I0817 22:43:51.868918  260271 main.go:141] libmachine: (newest-cni-249978) Waiting for SSH to be available...
	I0817 22:43:51.872353  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:51.872782  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:51.872815  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:51.873025  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Using SSH client type: external
	I0817 22:43:51.873051  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa (-rw-------)
	I0817 22:43:51.873094  260271 main.go:141] libmachine: (newest-cni-249978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:43:51.873113  260271 main.go:141] libmachine: (newest-cni-249978) DBG | About to run SSH command:
	I0817 22:43:51.873136  260271 main.go:141] libmachine: (newest-cni-249978) DBG | exit 0
	I0817 22:43:51.970268  260271 main.go:141] libmachine: (newest-cni-249978) DBG | SSH cmd err, output: <nil>: 
	I0817 22:43:51.970572  260271 main.go:141] libmachine: (newest-cni-249978) KVM machine creation complete!
	I0817 22:43:51.970965  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetConfigRaw
	I0817 22:43:51.971577  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:51.971820  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:51.972012  260271 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0817 22:43:51.972028  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:43:51.973748  260271 main.go:141] libmachine: Detecting operating system of created instance...
	I0817 22:43:51.973769  260271 main.go:141] libmachine: Waiting for SSH to be available...
	I0817 22:43:51.973779  260271 main.go:141] libmachine: Getting to WaitForSSH function...
	I0817 22:43:51.973790  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:51.976149  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:51.976514  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:51.976556  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:51.976671  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:51.976894  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:51.977088  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:51.977264  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:51.977451  260271 main.go:141] libmachine: Using SSH client type: native
	I0817 22:43:51.977933  260271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:43:51.977947  260271 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0817 22:43:52.105508  260271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:43:52.105540  260271 main.go:141] libmachine: Detecting the provisioner...
	I0817 22:43:52.105554  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:52.108587  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.109024  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.109059  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.109211  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:52.109412  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.109642  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.109876  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:52.110104  260271 main.go:141] libmachine: Using SSH client type: native
	I0817 22:43:52.110506  260271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:43:52.110518  260271 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0817 22:43:52.242865  260271 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0817 22:43:52.243034  260271 main.go:141] libmachine: found compatible host: buildroot
	I0817 22:43:52.243052  260271 main.go:141] libmachine: Provisioning with buildroot...
	I0817 22:43:52.243068  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:43:52.243378  260271 buildroot.go:166] provisioning hostname "newest-cni-249978"
	I0817 22:43:52.243411  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:43:52.243609  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:52.246780  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.247242  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.247278  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.247558  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:52.247805  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.248026  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.248211  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:52.248417  260271 main.go:141] libmachine: Using SSH client type: native
	I0817 22:43:52.248870  260271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:43:52.248885  260271 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-249978 && echo "newest-cni-249978" | sudo tee /etc/hostname
	I0817 22:43:52.393020  260271 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-249978
	
	I0817 22:43:52.393058  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:52.395853  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.396295  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.396332  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.396548  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:52.396768  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.396979  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.397157  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:52.397350  260271 main.go:141] libmachine: Using SSH client type: native
	I0817 22:43:52.397853  260271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:43:52.397875  260271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-249978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-249978/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-249978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:43:52.536619  260271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:43:52.536656  260271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:43:52.536712  260271 buildroot.go:174] setting up certificates
	I0817 22:43:52.536730  260271 provision.go:83] configureAuth start
	I0817 22:43:52.536749  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:43:52.537096  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:43:52.539973  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.540342  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.540386  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.540469  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:52.542718  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.543127  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.543176  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.543433  260271 provision.go:138] copyHostCerts
	I0817 22:43:52.543510  260271 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:43:52.543524  260271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:43:52.543609  260271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:43:52.543721  260271 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:43:52.543733  260271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:43:52.543762  260271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:43:52.543820  260271 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:43:52.543827  260271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:43:52.543849  260271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:43:52.543894  260271 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.newest-cni-249978 san=[192.168.72.79 192.168.72.79 localhost 127.0.0.1 minikube newest-cni-249978]
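The provision step above mints a TLS server certificate for the new VM, signed by the shared minikube CA and carrying the SANs listed in the log line. A rough openssl equivalent of that step is sketched below; this is illustrative only (minikube does this in Go, and the 2048-bit key size here is an assumption), with paths relative to the .minikube directory from the log.
	# Sketch only: issue a server cert signed by the minikube CA, with the SANs shown in the log line above.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.newest-cni-249978"
	openssl x509 -req -in server.csr \
	  -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:192.168.72.79,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:newest-cni-249978")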
	I0817 22:43:52.642744  260271 provision.go:172] copyRemoteCerts
	I0817 22:43:52.642803  260271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:43:52.642834  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:52.645463  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.645841  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.645875  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.646026  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:52.646279  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.646460  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:52.646587  260271 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:43:52.746076  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:43:52.772227  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:43:52.796085  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:43:52.823029  260271 provision.go:86] duration metric: configureAuth took 286.282335ms
	I0817 22:43:52.823060  260271 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:43:52.823286  260271 config.go:182] Loaded profile config "newest-cni-249978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:43:52.823395  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:52.826508  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.827000  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:52.827035  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:52.827367  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:52.827607  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.827819  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:52.827975  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:52.828165  260271 main.go:141] libmachine: Using SSH client type: native
	I0817 22:43:52.828650  260271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:43:52.828672  260271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:43:53.157406  260271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
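The %!s(MISSING) in the logged command above appears to be a Go formatting artifact (the command template is printed without its printf argument); judging from the echoed output, the command actually executed on the guest is equivalent to the reconstruction below, shown here only for readability. The same artifact shows up later in the logged `date +%!s(MISSING).%!N(MISSING)` probe.
	# Reconstruction of the command whose output is echoed above (not an additional step).
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio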
	
	I0817 22:43:53.157457  260271 main.go:141] libmachine: Checking connection to Docker...
	I0817 22:43:53.157471  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetURL
	I0817 22:43:53.158844  260271 main.go:141] libmachine: (newest-cni-249978) DBG | Using libvirt version 6000000
	I0817 22:43:53.161147  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.161522  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.161557  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.161717  260271 main.go:141] libmachine: Docker is up and running!
	I0817 22:43:53.161738  260271 main.go:141] libmachine: Reticulating splines...
	I0817 22:43:53.161747  260271 client.go:171] LocalClient.Create took 23.83115151s
	I0817 22:43:53.161772  260271 start.go:167] duration metric: libmachine.API.Create for "newest-cni-249978" took 23.83121069s
	I0817 22:43:53.161784  260271 start.go:300] post-start starting for "newest-cni-249978" (driver="kvm2")
	I0817 22:43:53.161799  260271 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:43:53.161839  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:53.162110  260271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:43:53.162138  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:53.164615  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.164962  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.164994  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.165146  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:53.165360  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:53.165549  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:53.165720  260271 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:43:53.261618  260271 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:43:53.266934  260271 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:43:53.266975  260271 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:43:53.267086  260271 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:43:53.267196  260271 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:43:53.267328  260271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:43:53.278708  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:43:53.302950  260271 start.go:303] post-start completed in 141.146537ms
	I0817 22:43:53.303030  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetConfigRaw
	I0817 22:43:53.303765  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:43:53.307141  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.307637  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.307683  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.308054  260271 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json ...
	I0817 22:43:53.308332  260271 start.go:128] duration metric: createHost completed in 23.998422111s
	I0817 22:43:53.308398  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:53.310816  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.311179  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.311222  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.311321  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:53.311550  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:53.311712  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:53.311877  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:53.312062  260271 main.go:141] libmachine: Using SSH client type: native
	I0817 22:43:53.312453  260271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:43:53.312466  260271 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:43:53.438952  260271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692312233.418550211
	
	I0817 22:43:53.438981  260271 fix.go:206] guest clock: 1692312233.418550211
	I0817 22:43:53.438992  260271 fix.go:219] Guest: 2023-08-17 22:43:53.418550211 +0000 UTC Remote: 2023-08-17 22:43:53.308351361 +0000 UTC m=+24.122080129 (delta=110.19885ms)
	I0817 22:43:53.439018  260271 fix.go:190] guest clock delta is within tolerance: 110.19885ms
	I0817 22:43:53.439025  260271 start.go:83] releasing machines lock for "newest-cni-249978", held for 24.129212766s
	I0817 22:43:53.439066  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:53.439389  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:43:53.442185  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.442583  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.442611  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.442781  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:53.443327  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:53.443475  260271 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:43:53.443582  260271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:43:53.443625  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:53.443645  260271 ssh_runner.go:195] Run: cat /version.json
	I0817 22:43:53.443667  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:43:53.446580  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.446609  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.447109  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.447146  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:53.447168  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.447186  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:53.447343  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:53.447349  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:43:53.447609  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:53.447609  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:43:53.447771  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:53.447780  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:43:53.447935  260271 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:43:53.447966  260271 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:43:53.566338  260271 ssh_runner.go:195] Run: systemctl --version
	I0817 22:43:53.574157  260271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:43:53.743341  260271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:43:53.749741  260271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:43:53.749832  260271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:43:53.766503  260271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:43:53.766533  260271 start.go:466] detecting cgroup driver to use...
	I0817 22:43:53.766640  260271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:43:53.780846  260271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:43:53.793672  260271 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:43:53.793742  260271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:43:53.808307  260271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:43:53.822701  260271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:43:53.937075  260271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:43:54.062932  260271 docker.go:212] disabling docker service ...
	I0817 22:43:54.063050  260271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:43:54.078573  260271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:43:54.091991  260271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:43:54.206287  260271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:43:54.326733  260271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:43:54.342024  260271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:43:54.360766  260271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:43:54.360846  260271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:43:54.373757  260271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:43:54.373853  260271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:43:54.386483  260271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:43:54.398301  260271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:43:54.411437  260271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:43:54.424128  260271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:43:54.435779  260271 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:43:54.435865  260271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:43:54.452927  260271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:43:54.462895  260271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:43:54.582588  260271 ssh_runner.go:195] Run: sudo systemctl restart crio
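Condensed, the CRI-O preparation performed above comes down to the following guest-side edits before the runtime restart; this is a restatement of the commands already logged, not additional configuration.
	# Point CRI-O at the expected pause image and cgroup settings, then make sure bridge traffic can be handled.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter                            # the sysctl probe above failed, so the module is loaded explicitly
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio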
	I0817 22:43:54.763481  260271 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:43:54.763558  260271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:43:54.769089  260271 start.go:534] Will wait 60s for crictl version
	I0817 22:43:54.769237  260271 ssh_runner.go:195] Run: which crictl
	I0817 22:43:54.773964  260271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:43:54.816027  260271 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:43:54.816123  260271 ssh_runner.go:195] Run: crio --version
	I0817 22:43:54.867180  260271 ssh_runner.go:195] Run: crio --version
	I0817 22:43:54.922950  260271 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:43:54.924500  260271 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:43:54.927529  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:54.927886  260271 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:43:45 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:43:54.927917  260271 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:43:54.928204  260271 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:43:54.933126  260271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
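The host entry above is injected with a small idempotent pattern: drop any existing line ending in the name, append the current mapping, and copy the result back over /etc/hosts. A generic form of the same pattern, with hypothetical NAME/ADDR variables, looks like this:
	# Idempotently pin NAME to ADDR in /etc/hosts (generic form of the one-liner above; NAME/ADDR are placeholders).
	NAME=host.minikube.internal
	ADDR=192.168.72.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts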
	I0817 22:43:54.946834  260271 localpath.go:92] copying /home/jenkins/minikube-integration/16865-203458/.minikube/client.crt -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/client.crt
	I0817 22:43:54.947030  260271 localpath.go:117] copying /home/jenkins/minikube-integration/16865-203458/.minikube/client.key -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/client.key
	I0817 22:43:54.949219  260271 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0817 22:43:54.950921  260271 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:43:54.951004  260271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:43:54.979747  260271 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:43:54.979826  260271 ssh_runner.go:195] Run: which lz4
	I0817 22:43:54.984274  260271 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:43:54.989122  260271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:43:54.989159  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457054966 bytes)
	I0817 22:43:56.904251  260271 crio.go:444] Took 1.920003 seconds to copy over tarball
	I0817 22:43:56.904354  260271 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:44:00.007280  260271 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102880416s)
	I0817 22:44:00.007335  260271 crio.go:451] Took 3.103052 seconds to extract the tarball
	I0817 22:44:00.007348  260271 ssh_runner.go:146] rm: /preloaded.tar.lz4
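With no preload present on the fresh VM, the ~457 MB image tarball is copied in and unpacked over /var. The guest-side portion of that step boils down to the sketch below (paths as in the log); the trailing crictl listing is just a sanity check that the preloaded images are now visible to CRI-O.
	# Sketch of the preload step above: unpack the lz4 tarball over /var, then drop the archive.
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images | grep kube-apiserver    # should now list registry.k8s.io/kube-apiserver:v1.28.0-rc.1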
	I0817 22:44:00.057180  260271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:44:00.132896  260271 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:44:00.132933  260271 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:44:00.133020  260271 ssh_runner.go:195] Run: crio config
	I0817 22:44:00.210601  260271 cni.go:84] Creating CNI manager for ""
	I0817 22:44:00.210628  260271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:44:00.210654  260271 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0817 22:44:00.210675  260271 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.79 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-249978 NodeName:newest-cni-249978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.72.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:44:00.210857  260271 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-249978"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
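The kubeadm configuration above is written to /var/tmp/minikube/kubeadm.yaml.new and copied into place further down before `kubeadm init` runs. If a config like this needs to be sanity-checked by hand, recent kubeadm releases can lint it directly; the command below is illustrative and assumes the file has already been copied to its final path.
	# Lint the generated config with the matching kubeadm binary (supported by recent kubeadm versions).
	sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml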
	
	I0817 22:44:00.210942  260271 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-249978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:44:00.211019  260271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:44:00.223503  260271 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:44:00.223598  260271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:44:00.236600  260271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I0817 22:44:00.256792  260271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:44:00.277513  260271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0817 22:44:00.297568  260271 ssh_runner.go:195] Run: grep 192.168.72.79	control-plane.minikube.internal$ /etc/hosts
	I0817 22:44:00.302305  260271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.79	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:44:00.315647  260271 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978 for IP: 192.168.72.79
	I0817 22:44:00.315696  260271 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:44:00.315935  260271 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:44:00.315976  260271 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:44:00.316054  260271 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/client.key
	I0817 22:44:00.316077  260271 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key.7538f06f
	I0817 22:44:00.316090  260271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt.7538f06f with IP's: [192.168.72.79 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 22:44:00.406334  260271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt.7538f06f ...
	I0817 22:44:00.406370  260271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt.7538f06f: {Name:mkd619fb46d564be8fe3d58b512d0aa9fa41b044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:44:00.406550  260271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key.7538f06f ...
	I0817 22:44:00.406562  260271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key.7538f06f: {Name:mk467aa7c342b55b55e142ca313cfa64135ed9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:44:00.406633  260271 certs.go:337] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt.7538f06f -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt
	I0817 22:44:00.406691  260271 certs.go:341] copying /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key.7538f06f -> /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key
	I0817 22:44:00.406888  260271 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key
	I0817 22:44:00.406931  260271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.crt with IP's: []
	I0817 22:44:00.552131  260271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.crt ...
	I0817 22:44:00.552166  260271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.crt: {Name:mk675cb54a9c165debbcf0ebf6e711050bfdbcb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:44:00.552342  260271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key ...
	I0817 22:44:00.552355  260271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key: {Name:mkd751b5820ce14b712d191278991f062a8edf5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:44:00.552531  260271 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:44:00.552570  260271 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:44:00.552592  260271 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:44:00.552622  260271 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:44:00.552687  260271 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:44:00.552717  260271 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:44:00.552793  260271 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:44:00.553620  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:44:00.585158  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:44:00.614029  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:44:00.644754  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:44:00.672243  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:44:00.702100  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:44:00.730280  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:44:00.760002  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:44:00.787745  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:44:00.815408  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:44:00.840935  260271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:44:00.867223  260271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:44:00.886034  260271 ssh_runner.go:195] Run: openssl version
	I0817 22:44:00.892643  260271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:44:00.904834  260271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:44:00.911349  260271 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:44:00.911437  260271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:44:00.917768  260271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:44:00.929689  260271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:44:00.941997  260271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:44:00.948008  260271 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:44:00.948094  260271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:44:00.954604  260271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:44:00.968337  260271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:44:00.981831  260271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:44:00.987647  260271 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:44:00.987723  260271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:44:00.994038  260271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
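The test/ln pairs above install each CA certificate under /etc/ssl/certs and additionally link it by its OpenSSL subject hash, which is how OpenSSL-based clients look up trust anchors. The generic pattern, with a hypothetical CERT variable, is:
	# Link a CA cert by subject hash so OpenSSL can find it (generic form of the steps above).
	CERT=/usr/share/ca-certificates/minikubeCA.pem     # example path taken from the log
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941 for minikubeCA.pem above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"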
	I0817 22:44:01.005911  260271 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:44:01.012088  260271 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 22:44:01.012158  260271 kubeadm.go:404] StartCluster: {Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249
978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:44:01.012284  260271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:44:01.012348  260271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:44:01.049784  260271 cri.go:89] found id: ""
	I0817 22:44:01.049886  260271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:44:01.060614  260271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:44:01.071816  260271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:44:01.082616  260271 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:44:01.082674  260271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:44:01.474998  260271 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:44:14.225304  260271 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0-rc.1
	I0817 22:44:14.225510  260271 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:44:14.225595  260271 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:44:14.225738  260271 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:44:14.225877  260271 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:44:14.225962  260271 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:44:14.227962  260271 out.go:204]   - Generating certificates and keys ...
	I0817 22:44:14.228069  260271 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:44:14.228162  260271 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:44:14.228259  260271 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 22:44:14.228330  260271 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 22:44:14.228419  260271 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 22:44:14.228493  260271 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 22:44:14.228592  260271 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 22:44:14.228751  260271 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-249978] and IPs [192.168.72.79 127.0.0.1 ::1]
	I0817 22:44:14.228850  260271 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 22:44:14.229053  260271 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-249978] and IPs [192.168.72.79 127.0.0.1 ::1]
	I0817 22:44:14.229149  260271 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 22:44:14.229205  260271 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 22:44:14.229246  260271 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 22:44:14.229292  260271 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:44:14.229341  260271 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:44:14.229413  260271 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:44:14.229511  260271 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:44:14.229601  260271 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:44:14.229708  260271 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:44:14.229800  260271 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:44:14.231460  260271 out.go:204]   - Booting up control plane ...
	I0817 22:44:14.231582  260271 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:44:14.231694  260271 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:44:14.231761  260271 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:44:14.231881  260271 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:44:14.231992  260271 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:44:14.232049  260271 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:44:14.232257  260271 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:44:14.232372  260271 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004457 seconds
	I0817 22:44:14.232489  260271 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:44:14.232601  260271 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:44:14.232668  260271 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:44:14.232877  260271 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-249978 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:44:14.232959  260271 kubeadm.go:322] [bootstrap-token] Using token: tub3ld.uld7zpwmfy5g9n2b
	I0817 22:44:14.235878  260271 out.go:204]   - Configuring RBAC rules ...
	I0817 22:44:14.236033  260271 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:44:14.236141  260271 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:44:14.236347  260271 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:44:14.236536  260271 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:44:14.236710  260271 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:44:14.236834  260271 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:44:14.236995  260271 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:44:14.237071  260271 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:44:14.237162  260271 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:44:14.237176  260271 kubeadm.go:322] 
	I0817 22:44:14.237256  260271 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:44:14.237270  260271 kubeadm.go:322] 
	I0817 22:44:14.237410  260271 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:44:14.237421  260271 kubeadm.go:322] 
	I0817 22:44:14.237453  260271 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:44:14.237532  260271 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:44:14.237588  260271 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:44:14.237594  260271 kubeadm.go:322] 
	I0817 22:44:14.237640  260271 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:44:14.237664  260271 kubeadm.go:322] 
	I0817 22:44:14.237757  260271 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:44:14.237766  260271 kubeadm.go:322] 
	I0817 22:44:14.237840  260271 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:44:14.237935  260271 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:44:14.238029  260271 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:44:14.238041  260271 kubeadm.go:322] 
	I0817 22:44:14.238161  260271 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:44:14.238262  260271 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:44:14.238273  260271 kubeadm.go:322] 
	I0817 22:44:14.238377  260271 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tub3ld.uld7zpwmfy5g9n2b \
	I0817 22:44:14.238521  260271 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:44:14.238554  260271 kubeadm.go:322] 	--control-plane 
	I0817 22:44:14.238560  260271 kubeadm.go:322] 
	I0817 22:44:14.238660  260271 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:44:14.238667  260271 kubeadm.go:322] 
	I0817 22:44:14.238804  260271 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tub3ld.uld7zpwmfy5g9n2b \
	I0817 22:44:14.238960  260271 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:44:14.238976  260271 cni.go:84] Creating CNI manager for ""
	I0817 22:44:14.238986  260271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:44:14.240916  260271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:24:06 UTC, ends at Thu 2023-08-17 22:44:16 UTC. --
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.801337537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6f09f6c-5bad-4394-9507-c22ffc03c33b name=/runtime.v1.RuntimeService/ListContainers
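
The journal entries above and below are CRI-O debug traces of the ListContainers RPC being polled roughly every 40 ms while the logs were collected. For orientation only (this is not part of the captured log), a minimal Go sketch of issuing the same RPC against the CRI-O socket might look like the following; the socket path, timeout, and trimmed output format are assumptions for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Assumed CRI-O socket path; adjust if the runtime is configured differently.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock())
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)

        // An empty filter mirrors the "No filters were applied" debug lines:
        // the runtime returns its full container list.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            id := c.Id
            if len(id) > 12 {
                id = id[:12]
            }
            fmt.Printf("%s  %-24s %v\n", id, c.Metadata.Name, c.State)
        }
    }

Each ListContainersResponse in the journal is simply the serialized form of the resp value above.
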
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.824559786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8e2b9a2b-244d-4bbb-b2a1-23371a851eea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.824672100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8e2b9a2b-244d-4bbb-b2a1-23371a851eea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.824930967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e2b9a2b-244d-4bbb-b2a1-23371a851eea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.864988815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=026981a9-8d75-4ea4-825f-7fad505fe294 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.865056006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=026981a9-8d75-4ea4-825f-7fad505fe294 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.865263462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=026981a9-8d75-4ea4-825f-7fad505fe294 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.904430700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5013e51d-6053-4f42-bf9a-f09286b4d747 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.904548319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5013e51d-6053-4f42-bf9a-f09286b4d747 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.904832146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5013e51d-6053-4f42-bf9a-f09286b4d747 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.943191117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b9c7b5e6-197e-4983-b26d-a9b13565cfd0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.943280347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9c7b5e6-197e-4983-b26d-a9b13565cfd0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.943555487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9c7b5e6-197e-4983-b26d-a9b13565cfd0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.986503998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=04972146-ac75-49b9-8499-2faed4a975d1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.986664376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=04972146-ac75-49b9-8499-2faed4a975d1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:15 no-preload-525875 crio[732]: time="2023-08-17 22:44:15.987006128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=04972146-ac75-49b9-8499-2faed4a975d1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.029296618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5e86159f-a5ce-4e98-b605-aed1a20cc939 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.029360438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5e86159f-a5ce-4e98-b605-aed1a20cc939 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.029759449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5e86159f-a5ce-4e98-b605-aed1a20cc939 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.070203387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9b4a23b4-9a68-4b98-bc87-c197b266f3e2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.070276123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9b4a23b4-9a68-4b98-bc87-c197b266f3e2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.070608839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9b4a23b4-9a68-4b98-bc87-c197b266f3e2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.103715169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cb6be7a-46d6-4515-950c-25c93a1b790d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.103787226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cb6be7a-46d6-4515-950c-25c93a1b790d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:44:16 no-preload-525875 crio[732]: time="2023-08-17 22:44:16.103983284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1692311125523469419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bdd65247a55bf396dc194a9699eaa0e27f92fe88b29aa0eedb83a1986fbd10,PodSandboxId:ceacda7783d0e682f8e39dec9ed11c58e2bb28a51c90b2b6b33b6c1a5a72edf0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1692311102888746138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 120471b2-fc06-44fc-b89c-bdaa40d7bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f7bfc2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18,PodSandboxId:6091987ad77f3afc72377602e492327f83740585a4e01449c342c6a3291a3364,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1692311101261176759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b54g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fad219a-90a1-4ec1-b6fe-12632c5f1913,},Annotations:map[string]string{io.kubernetes.container.hash: 27f31efb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d,PodSandboxId:2b4d6e984e3d60cdaad2da8c5512a08631cef2cf1db629f687ec5bbe17c1941a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1692311094873165239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f18e7ab1-0b36-4439-9282-fbc4bf804abc,},Annotations:map[string]string{io.kubernetes.container.hash: ccaaeba9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405,PodSandboxId:7eff5cea7a2d56309c971c799ea6dabd8cd1cb2810792766b34d0a9fff8fdcb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5d5db8af0e90b726d14a4de78a6d357d418fa30f85757c277f7f6006e8be2b32,State:CONTAINER_RUNNING,CreatedAt:1692311094772482412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pzpk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4373b29e-6b1
1-4c28-bbb4-3d97d2151565,},Annotations:map[string]string{io.kubernetes.container.hash: 3a00bb24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17,PodSandboxId:9ff32d7ae3574b4c0feb7cba02b325b46349b9fbd582a8a3a3d0190d6fc12e1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:70b5a094f707b194a352b95a6a4f7e575d5f1e5aa8286e92f9182d666e22c964,State:CONTAINER_RUNNING,CreatedAt:1692311087981651350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94776b0de805dc2eb27
4a1ccba3664d8,},Annotations:map[string]string{io.kubernetes.container.hash: cc13e931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992,PodSandboxId:3888db714dfc1a339ff830cf67e61ccae9920677ca4dc2f62d66239a67322b31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1692311087811510891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7d2eeeb2d0a33c6a24769d16540e4a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5a3277c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5,PodSandboxId:256a46760aba17e6f831aa558adbd8f4ec80ef78d5eb550f57184a95d1cb5261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:971942750dfef848fc5b4ff9dc226e67270586e1926abec344092bb3f23f1b43,State:CONTAINER_RUNNING,CreatedAt:1692311087340688687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960faaac9ae3a0d7825b7493a9c82b6f,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 71d0a9eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb,PodSandboxId:77902597faf871b5b293cfefda657bd1012458bd13c81e0d4e88761e4d232300,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:dd6f06ffed926596cd089b691bb9f5ef500e7b3c790a12436e87a2969d51f943,State:CONTAINER_RUNNING,CreatedAt:1692311087266033381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-525875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36a07c887c961b04c1a6eb6f19354fe,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 489f2a07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cb6be7a-46d6-4515-950c-25c93a1b790d name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	5e92f33147487       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   2b4d6e984e3d6
	68bdd65247a55       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   ceacda7783d0e
	4b2d6d0a0e671       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   6091987ad77f3
	659e02540293f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   2b4d6e984e3d6
	d5071416ecfc1       cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8                                      19 minutes ago      Running             kube-proxy                1                   7eff5cea7a2d5
	291d84856ee9a       046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd                                      19 minutes ago      Running             kube-scheduler            1                   9ff32d7ae3574
	07f7152c064dc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   3888db714dfc1
	c3d45374a533d       2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d                                      19 minutes ago      Running             kube-apiserver            1                   256a46760aba1
	8ecbcee30abd9       e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef                                      19 minutes ago      Running             kube-controller-manager   1                   77902597faf87
	
	* 
	* ==> coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55806 - 6050 "HINFO IN 8937173382687744230.8945713894719446716. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014558274s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-525875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-525875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=no-preload-525875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_15_44_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:15:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-525875
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:44:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:40:43 +0000   Thu, 17 Aug 2023 22:15:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:40:43 +0000   Thu, 17 Aug 2023 22:15:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:40:43 +0000   Thu, 17 Aug 2023 22:15:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:40:43 +0000   Thu, 17 Aug 2023 22:25:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.196
	  Hostname:    no-preload-525875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e776e3b16f4c4807aa6ba95a93d58c39
	  System UUID:                e776e3b1-6f4c-4807-aa6b-a95a93d58c39
	  Boot ID:                    48f01ae5-f920-4505-b883-dc0cc5dc6b19
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.0-rc.1
	  Kube-Proxy Version:         v1.28.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-b54g4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-525875                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-525875             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-525875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-pzpk2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-525875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-25p7z              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-525875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-525875 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-525875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-525875 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-525875 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-525875 event: Registered Node no-preload-525875 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-525875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-525875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-525875 event: Registered Node no-preload-525875 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071934] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug17 22:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.427864] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140110] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.504956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.040204] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.110690] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.152209] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.128676] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[  +0.223428] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[ +30.772289] systemd-fstab-generator[1232]: Ignoring "noauto" for root device
	[ +14.293746] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] <==
	* {"level":"warn","ts":"2023-08-17T22:25:06.156707Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:05.676844Z","time spent":"479.859729ms","remote":"127.0.0.1:39296","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4464,"request content":"key:\"/registry/minions/no-preload-525875\" "}
	{"level":"warn","ts":"2023-08-17T22:25:06.156816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.639033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:25:06.15683Z","caller":"traceutil/trace.go:171","msg":"trace[840329666] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:625; }","duration":"468.651873ms","start":"2023-08-17T22:25:05.688173Z","end":"2023-08-17T22:25:06.156824Z","steps":["trace[840329666] 'agreement among raft nodes before linearized reading'  (duration: 468.627123ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:06.156841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:05.688157Z","time spent":"468.681495ms","remote":"127.0.0.1:39258","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-08-17T22:25:29.422903Z","caller":"traceutil/trace.go:171","msg":"trace[1234259991] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"128.15293ms","start":"2023-08-17T22:25:29.294716Z","end":"2023-08-17T22:25:29.422868Z","steps":["trace[1234259991] 'process raft request'  (duration: 127.944397ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T22:25:29.77882Z","caller":"traceutil/trace.go:171","msg":"trace[638718583] linearizableReadLoop","detail":"{readStateIndex:691; appliedIndex:690; }","duration":"350.223706ms","start":"2023-08-17T22:25:29.428582Z","end":"2023-08-17T22:25:29.778806Z","steps":["trace[638718583] 'read index received'  (duration: 308.089994ms)","trace[638718583] 'applied index is now lower than readState.Index'  (duration: 42.132974ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T22:25:29.779007Z","caller":"traceutil/trace.go:171","msg":"trace[221553669] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"475.303542ms","start":"2023-08-17T22:25:29.303695Z","end":"2023-08-17T22:25:29.778998Z","steps":["trace[221553669] 'process raft request'  (duration: 433.066867ms)","trace[221553669] 'compare'  (duration: 41.769491ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:25:29.779136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.303678Z","time spent":"475.385127ms","remote":"127.0.0.1:39298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4055,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-25p7z\" mod_revision:631 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-25p7z\" value_size:3989 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-25p7z\" > >"}
	{"level":"warn","ts":"2023-08-17T22:25:29.779136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.7662ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2023-08-17T22:25:29.779461Z","caller":"traceutil/trace.go:171","msg":"trace[720060691] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577; range_end:; response_count:1; response_revision:644; }","duration":"351.093914ms","start":"2023-08-17T22:25:29.428356Z","end":"2023-08-17T22:25:29.77945Z","steps":["trace[720060691] 'agreement among raft nodes before linearized reading'  (duration: 350.749934ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:29.779528Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.428336Z","time spent":"351.182377ms","remote":"127.0.0.1:39274","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":805,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" "}
	{"level":"info","ts":"2023-08-17T22:25:30.248594Z","caller":"traceutil/trace.go:171","msg":"trace[635304970] linearizableReadLoop","detail":"{readStateIndex:692; appliedIndex:691; }","duration":"461.067955ms","start":"2023-08-17T22:25:29.787508Z","end":"2023-08-17T22:25:30.248576Z","steps":["trace[635304970] 'read index received'  (duration: 366.035648ms)","trace[635304970] 'applied index is now lower than readState.Index'  (duration: 95.03112ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T22:25:30.248723Z","caller":"traceutil/trace.go:171","msg":"trace[1843546038] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"464.56781ms","start":"2023-08-17T22:25:29.784138Z","end":"2023-08-17T22:25:30.248706Z","steps":["trace[1843546038] 'process raft request'  (duration: 369.46243ms)","trace[1843546038] 'compare'  (duration: 94.738361ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:25:30.248845Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.784121Z","time spent":"464.666813ms","remote":"127.0.0.1:39274","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" mod_revision:601 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" value_size:672 lease:4841239881108774533 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-25p7z.177c4bfa87acc577\" > >"}
	{"level":"warn","ts":"2023-08-17T22:25:30.248861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.382774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-525875\" ","response":"range_response_count:1 size:4693"}
	{"level":"info","ts":"2023-08-17T22:25:30.249069Z","caller":"traceutil/trace.go:171","msg":"trace[65102174] range","detail":"{range_begin:/registry/minions/no-preload-525875; range_end:; response_count:1; response_revision:645; }","duration":"461.601268ms","start":"2023-08-17T22:25:29.78746Z","end":"2023-08-17T22:25:30.249061Z","steps":["trace[65102174] 'agreement among raft nodes before linearized reading'  (duration: 461.263036ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:25:30.249134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:25:29.78745Z","time spent":"461.672509ms","remote":"127.0.0.1:39296","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4716,"request content":"key:\"/registry/minions/no-preload-525875\" "}
	{"level":"info","ts":"2023-08-17T22:34:50.544578Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":867}
	{"level":"info","ts":"2023-08-17T22:34:50.54854Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":867,"took":"2.983304ms","hash":2385570467}
	{"level":"info","ts":"2023-08-17T22:34:50.548718Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2385570467,"revision":867,"compact-revision":-1}
	{"level":"info","ts":"2023-08-17T22:39:50.558767Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1110}
	{"level":"info","ts":"2023-08-17T22:39:50.56198Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1110,"took":"1.989758ms","hash":2433854632}
	{"level":"info","ts":"2023-08-17T22:39:50.562148Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2433854632,"revision":1110,"compact-revision":867}
	{"level":"warn","ts":"2023-08-17T22:44:00.349234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.792676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:44:00.349925Z","caller":"traceutil/trace.go:171","msg":"trace[238847710] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1555; }","duration":"114.603812ms","start":"2023-08-17T22:44:00.235284Z","end":"2023-08-17T22:44:00.349888Z","steps":["trace[238847710] 'count revisions from in-memory index tree'  (duration: 113.562578ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:44:16 up 20 min,  0 users,  load average: 0.01, 0.10, 0.14
	Linux no-preload-525875 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] <==
	* W0817 22:39:53.281091       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:39:53.281272       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:39:53.282677       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:40:52.093354       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:40:52.093516       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:40:53.282021       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:40:53.282099       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:40:53.282106       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:40:53.283259       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:40:53.283358       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:40:53.283490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:41:52.093055       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:41:52.093166       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:42:52.093803       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:42:52.094076       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:42:53.282232       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:42:53.282339       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:42:53.282348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:42:53.283692       1 handler_proxy.go:93] no RequestInfo found in the context
	E0817 22:42:53.283743       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:42:53.283751       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:43:52.092655       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.123.149:443: connect: connection refused
	I0817 22:43:52.092854       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] <==
	* I0817 22:38:35.876585       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:39:05.292167       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:39:05.886818       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:39:35.298544       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:39:35.897065       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:40:05.307676       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:40:05.908302       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:40:35.313309       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:40:35.917938       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:41:05.320708       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:41:05.928325       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0817 22:41:14.306557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="340.313µs"
	I0817 22:41:26.303331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="170.115µs"
	E0817 22:41:35.328573       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:41:35.941870       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:42:05.335330       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:42:05.953116       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:42:35.344303       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:42:35.962057       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:43:05.350964       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:43:05.970538       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:43:35.357095       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:43:35.982788       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0817 22:44:05.363833       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0817 22:44:05.994695       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] <==
	* I0817 22:24:55.042851       1 server_others.go:69] "Using iptables proxy"
	I0817 22:24:55.071865       1 node.go:141] Successfully retrieved node IP: 192.168.61.196
	I0817 22:24:55.115855       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0817 22:24:55.115904       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0817 22:24:55.118787       1 server_others.go:152] "Using iptables Proxier"
	I0817 22:24:55.118858       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 22:24:55.119043       1 server.go:846] "Version info" version="v1.28.0-rc.1"
	I0817 22:24:55.119078       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:24:55.120057       1 config.go:188] "Starting service config controller"
	I0817 22:24:55.120115       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 22:24:55.120134       1 config.go:97] "Starting endpoint slice config controller"
	I0817 22:24:55.120138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 22:24:55.121008       1 config.go:315] "Starting node config controller"
	I0817 22:24:55.121046       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 22:24:55.220751       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 22:24:55.220806       1 shared_informer.go:318] Caches are synced for service config
	I0817 22:24:55.221144       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] <==
	* I0817 22:24:50.120999       1 serving.go:348] Generated self-signed cert in-memory
	W0817 22:24:52.143603       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 22:24:52.143886       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 22:24:52.144002       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 22:24:52.144111       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 22:24:52.286736       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0-rc.1"
	I0817 22:24:52.286792       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:24:52.290594       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0817 22:24:52.294497       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0817 22:24:52.294688       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 22:24:52.294730       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 22:24:52.395019       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:24:06 UTC, ends at Thu 2023-08-17 22:44:16 UTC. --
	Aug 17 22:41:37 no-preload-525875 kubelet[1238]: E0817 22:41:37.281935    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:41:46 no-preload-525875 kubelet[1238]: E0817 22:41:46.308782    1238 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:41:46 no-preload-525875 kubelet[1238]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:41:46 no-preload-525875 kubelet[1238]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:41:46 no-preload-525875 kubelet[1238]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 17 22:41:48 no-preload-525875 kubelet[1238]: E0817 22:41:48.281865    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:41:59 no-preload-525875 kubelet[1238]: E0817 22:41:59.281921    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:42:12 no-preload-525875 kubelet[1238]: E0817 22:42:12.284296    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:42:25 no-preload-525875 kubelet[1238]: E0817 22:42:25.282792    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:42:36 no-preload-525875 kubelet[1238]: E0817 22:42:36.283199    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:42:46 no-preload-525875 kubelet[1238]: E0817 22:42:46.310782    1238 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:42:46 no-preload-525875 kubelet[1238]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:42:46 no-preload-525875 kubelet[1238]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:42:46 no-preload-525875 kubelet[1238]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 17 22:42:51 no-preload-525875 kubelet[1238]: E0817 22:42:51.284005    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:43:06 no-preload-525875 kubelet[1238]: E0817 22:43:06.284083    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:43:20 no-preload-525875 kubelet[1238]: E0817 22:43:20.282961    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:43:34 no-preload-525875 kubelet[1238]: E0817 22:43:34.282309    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:43:46 no-preload-525875 kubelet[1238]: E0817 22:43:46.309039    1238 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:43:46 no-preload-525875 kubelet[1238]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:43:46 no-preload-525875 kubelet[1238]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:43:46 no-preload-525875 kubelet[1238]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 17 22:43:47 no-preload-525875 kubelet[1238]: E0817 22:43:47.283197    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:44:01 no-preload-525875 kubelet[1238]: E0817 22:44:01.283318    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	Aug 17 22:44:15 no-preload-525875 kubelet[1238]: E0817 22:44:15.283791    1238 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-25p7z" podUID="1069cee0-4d6e-4420-a3e5-c3ca300db03f"
	
	* 
	* ==> storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] <==
	* I0817 22:25:25.728191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:25:25.749533       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:25:25.749644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:25:43.161255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:25:43.161961       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7841fb22-cdbf-45fb-a010-e0a54a3a2824", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-525875_785a7956-f4d2-4576-b4a5-4686072cc982 became leader
	I0817 22:25:43.162085       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-525875_785a7956-f4d2-4576-b4a5-4686072cc982!
	I0817 22:25:43.263172       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-525875_785a7956-f4d2-4576-b4a5-4686072cc982!
	
	* 
	* ==> storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] <==
	* I0817 22:24:55.051184       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0817 22:25:25.056168       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-525875 -n no-preload-525875
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-525875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-25p7z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-525875 describe pod metrics-server-57f55c9bc5-25p7z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-525875 describe pod metrics-server-57f55c9bc5-25p7z: exit status 1 (72.528589ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-25p7z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-525875 describe pod metrics-server-57f55c9bc5-25p7z: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (332.88s)
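The repeated ImagePullBackOff in the kubelet log above is expected for this test: metrics-server is deliberately re-pointed at fake.domain (the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the Audit log further down records this), so the image fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. A minimal manual sketch of the pod/image check follows; it assumes the usual k8s-app=metrics-server label and that the no-preload-525875 profile still exists (it is deleted later in this run), so treat it as illustration only.

	# Manual sketch (assumes the usual k8s-app=metrics-server label; the profile is
	# deleted later in this run, so these commands may return NotFound).
	kubectl --context no-preload-525875 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-525875 -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].spec.containers[*].image}'
	# Expected output contains fake.domain/registry.k8s.io/echoserver:1.4, matching the
	# "Back-off pulling image" events in the kubelet log above.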

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (352.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0817 22:39:35.283687  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-437183 -n embed-certs-437183
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:45:12.931655206 +0000 UTC m=+5686.572422938
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-437183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-437183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.749µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-437183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
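The assertion at start_stop_delete_test.go:297 compares the images on the kubernetes-dashboard deployments against the registry.k8s.io/echoserver:1.4 override passed to "addons enable dashboard"; here the describe call had already hit the context deadline, so there was no deployment info left to compare. A rough manual sketch of the same image check is below, reusing the deployment and namespace from the describe command above; the jsonpath form is an illustrative alternative, not what the test itself runs.

	# Manual sketch of the image check; deployment/namespace are taken from the
	# describe command above. In this run the deployment never appeared, so this
	# would fail rather than print an image.
	kubectl --context embed-certs-437183 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# The test passes only if the output contains "registry.k8s.io/echoserver:1.4".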
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-437183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-437183 logs -n 25: (1.163265255s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:43 UTC | 17 Aug 23 22:43 UTC |
	| start   | -p newest-cni-249978 --memory=2200 --alsologtostderr   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:43 UTC | 17 Aug 23 22:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	| addons  | enable metrics-server -p newest-cni-249978             | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-249978                                   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-249978                  | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-249978 --memory=2200 --alsologtostderr   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:44:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:44:43.163827  260970 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:44:43.163980  260970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:44:43.163989  260970 out.go:309] Setting ErrFile to fd 2...
	I0817 22:44:43.163994  260970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:44:43.164196  260970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:44:43.164805  260970 out.go:303] Setting JSON to false
	I0817 22:44:43.165716  260970 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26808,"bootTime":1692285475,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:44:43.165779  260970 start.go:138] virtualization: kvm guest
	I0817 22:44:43.169063  260970 out.go:177] * [newest-cni-249978] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:44:43.171260  260970 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:44:43.171292  260970 notify.go:220] Checking for updates...
	I0817 22:44:43.174577  260970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:44:43.176301  260970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:44:43.179543  260970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:44:43.181158  260970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:44:43.182542  260970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:44:43.184414  260970 config.go:182] Loaded profile config "newest-cni-249978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:44:43.184851  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:44:43.184918  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:44:43.200017  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0817 22:44:43.200457  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:44:43.201092  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:44:43.201122  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:44:43.201494  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:44:43.201684  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:44:43.201984  260970 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:44:43.202330  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:44:43.202372  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:44:43.217738  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0817 22:44:43.218257  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:44:43.218854  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:44:43.218876  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:44:43.219281  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:44:43.219503  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:44:43.258209  260970 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:44:43.259733  260970 start.go:298] selected driver: kvm2
	I0817 22:44:43.259745  260970 start.go:902] validating driver "kvm2" against &{Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterN
ame:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTi
meout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:44:43.259895  260970 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:44:43.260613  260970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:44:43.260704  260970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:44:43.276946  260970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:44:43.277339  260970 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 22:44:43.277380  260970 cni.go:84] Creating CNI manager for ""
	I0817 22:44:43.277396  260970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:44:43.277407  260970 start_flags.go:319] config:
	{Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:44:43.277558  260970 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:44:43.279915  260970 out.go:177] * Starting control plane node newest-cni-249978 in cluster newest-cni-249978
	I0817 22:44:43.281775  260970 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:44:43.281829  260970 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 22:44:43.281839  260970 cache.go:57] Caching tarball of preloaded images
	I0817 22:44:43.281966  260970 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:44:43.281978  260970 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0817 22:44:43.282119  260970 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json ...
	I0817 22:44:43.282297  260970 start.go:365] acquiring machines lock for newest-cni-249978: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:44:43.282340  260970 start.go:369] acquired machines lock for "newest-cni-249978" in 23.186µs
	I0817 22:44:43.282354  260970 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:44:43.282361  260970 fix.go:54] fixHost starting: 
	I0817 22:44:43.282644  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:44:43.282685  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:44:43.297320  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I0817 22:44:43.297795  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:44:43.298373  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:44:43.298403  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:44:43.298710  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:44:43.298925  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:44:43.299058  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:44:43.300552  260970 fix.go:102] recreateIfNeeded on newest-cni-249978: state=Stopped err=<nil>
	I0817 22:44:43.300581  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	W0817 22:44:43.300770  260970 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:44:43.302893  260970 out.go:177] * Restarting existing kvm2 VM for "newest-cni-249978" ...
	I0817 22:44:43.304291  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Start
	I0817 22:44:43.304476  260970 main.go:141] libmachine: (newest-cni-249978) Ensuring networks are active...
	I0817 22:44:43.305248  260970 main.go:141] libmachine: (newest-cni-249978) Ensuring network default is active
	I0817 22:44:43.305633  260970 main.go:141] libmachine: (newest-cni-249978) Ensuring network mk-newest-cni-249978 is active
	I0817 22:44:43.306043  260970 main.go:141] libmachine: (newest-cni-249978) Getting domain xml...
	I0817 22:44:43.306937  260970 main.go:141] libmachine: (newest-cni-249978) Creating domain...
	I0817 22:44:44.615775  260970 main.go:141] libmachine: (newest-cni-249978) Waiting to get IP...
	I0817 22:44:44.616924  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:44.617345  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:44.617427  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:44.617345  261015 retry.go:31] will retry after 203.552259ms: waiting for machine to come up
	I0817 22:44:44.822966  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:44.823420  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:44.823454  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:44.823355  261015 retry.go:31] will retry after 266.281164ms: waiting for machine to come up
	I0817 22:44:45.090943  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:45.091497  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:45.091530  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:45.091438  261015 retry.go:31] will retry after 432.201323ms: waiting for machine to come up
	I0817 22:44:45.525215  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:45.525824  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:45.525860  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:45.525766  261015 retry.go:31] will retry after 461.389999ms: waiting for machine to come up
	I0817 22:44:45.988602  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:45.989192  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:45.989220  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:45.989093  261015 retry.go:31] will retry after 478.434585ms: waiting for machine to come up
	I0817 22:44:46.468774  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:46.469265  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:46.469293  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:46.469227  261015 retry.go:31] will retry after 911.517038ms: waiting for machine to come up
	I0817 22:44:47.382433  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:47.382991  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:47.383014  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:47.382943  261015 retry.go:31] will retry after 1.027658145s: waiting for machine to come up
	I0817 22:44:48.412170  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:48.412645  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:48.412683  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:48.412577  261015 retry.go:31] will retry after 1.352762707s: waiting for machine to come up
	I0817 22:44:49.767019  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:49.767563  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:49.767591  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:49.767490  261015 retry.go:31] will retry after 1.303613536s: waiting for machine to come up
	I0817 22:44:51.073084  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:51.073759  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:51.073824  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:51.073632  261015 retry.go:31] will retry after 1.855244581s: waiting for machine to come up
	I0817 22:44:52.931179  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:52.931747  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:52.931780  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:52.931693  261015 retry.go:31] will retry after 2.689274347s: waiting for machine to come up
	I0817 22:44:55.623204  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:55.623756  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:55.623786  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:55.623696  261015 retry.go:31] will retry after 2.411014909s: waiting for machine to come up
	I0817 22:44:58.036538  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:58.036942  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:58.036969  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:58.036897  261015 retry.go:31] will retry after 4.036401915s: waiting for machine to come up
	I0817 22:45:02.077774  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.078267  260970 main.go:141] libmachine: (newest-cni-249978) Found IP for machine: 192.168.72.79
	I0817 22:45:02.078285  260970 main.go:141] libmachine: (newest-cni-249978) Reserving static IP address...
	I0817 22:45:02.078334  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has current primary IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.078796  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "newest-cni-249978", mac: "52:54:00:88:0c:ac", ip: "192.168.72.79"} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.078831  260970 main.go:141] libmachine: (newest-cni-249978) Reserved static IP address: 192.168.72.79
	I0817 22:45:02.078850  260970 main.go:141] libmachine: (newest-cni-249978) DBG | skip adding static IP to network mk-newest-cni-249978 - found existing host DHCP lease matching {name: "newest-cni-249978", mac: "52:54:00:88:0c:ac", ip: "192.168.72.79"}
	I0817 22:45:02.078868  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Getting to WaitForSSH function...
	I0817 22:45:02.078890  260970 main.go:141] libmachine: (newest-cni-249978) Waiting for SSH to be available...
	I0817 22:45:02.080999  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.081341  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.081375  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.081500  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Using SSH client type: external
	I0817 22:45:02.081531  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa (-rw-------)
	I0817 22:45:02.081581  260970 main.go:141] libmachine: (newest-cni-249978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:45:02.081608  260970 main.go:141] libmachine: (newest-cni-249978) DBG | About to run SSH command:
	I0817 22:45:02.081622  260970 main.go:141] libmachine: (newest-cni-249978) DBG | exit 0
	I0817 22:45:02.178301  260970 main.go:141] libmachine: (newest-cni-249978) DBG | SSH cmd err, output: <nil>: 
	I0817 22:45:02.178690  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetConfigRaw
	I0817 22:45:02.179379  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:02.182229  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.182629  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.182674  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.182955  260970 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json ...
	I0817 22:45:02.183160  260970 machine.go:88] provisioning docker machine ...
	I0817 22:45:02.183180  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:02.183423  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:45:02.183602  260970 buildroot.go:166] provisioning hostname "newest-cni-249978"
	I0817 22:45:02.183631  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:45:02.183814  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.186073  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.186481  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.186531  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.186651  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.186960  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.187141  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.187321  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.187507  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:02.187941  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:02.187957  260970 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-249978 && echo "newest-cni-249978" | sudo tee /etc/hostname
	I0817 22:45:02.335697  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-249978
	
	I0817 22:45:02.335736  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.339187  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.339540  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.339569  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.339752  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.339992  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.340214  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.340389  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.340601  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:02.341232  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:02.341264  260970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-249978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-249978/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-249978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:45:02.487457  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:45:02.487487  260970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:45:02.487513  260970 buildroot.go:174] setting up certificates
	I0817 22:45:02.487522  260970 provision.go:83] configureAuth start
	I0817 22:45:02.487531  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:45:02.487871  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:02.490546  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.490939  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.490982  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.491313  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.493660  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.493961  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.493992  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.494169  260970 provision.go:138] copyHostCerts
	I0817 22:45:02.494226  260970 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:45:02.494237  260970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:45:02.494345  260970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:45:02.494525  260970 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:45:02.494539  260970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:45:02.494594  260970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:45:02.494697  260970 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:45:02.494720  260970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:45:02.494767  260970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:45:02.494876  260970 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.newest-cni-249978 san=[192.168.72.79 192.168.72.79 localhost 127.0.0.1 minikube newest-cni-249978]
	I0817 22:45:02.594463  260970 provision.go:172] copyRemoteCerts
	I0817 22:45:02.594522  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:45:02.594550  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.597587  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.597902  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.597938  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.598110  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.598318  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.598517  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.598643  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:02.696027  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:45:02.721498  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:45:02.747688  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
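	The three scp calls above stage the minikube CA and the freshly generated server certificate/key under /etc/docker on the guest. A minimal sketch, assuming the /etc/docker paths shown in the log, for spot-checking what was staged:

	    # confirm the server cert chains to the staged CA and carries the expected SANs
	    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

	The SAN list should match the san=[...] values from the "generating server cert" line above.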
	I0817 22:45:02.772074  260970 provision.go:86] duration metric: configureAuth took 284.534984ms
	I0817 22:45:02.772114  260970 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:45:02.772367  260970 config.go:182] Loaded profile config "newest-cni-249978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:45:02.772458  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.774997  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.775304  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.775354  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.775496  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.775685  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.775862  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.776019  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.776169  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:02.776757  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:02.776781  260970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:45:03.120134  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:45:03.120171  260970 machine.go:91] provisioned docker machine in 936.996396ms
	I0817 22:45:03.120187  260970 start.go:300] post-start starting for "newest-cni-249978" (driver="kvm2")
	I0817 22:45:03.120232  260970 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:45:03.120275  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.120635  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:45:03.120676  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.123936  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.124434  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.124468  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.124641  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.124859  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.125059  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.125245  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:03.225706  260970 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:45:03.230818  260970 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:45:03.230849  260970 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:45:03.230945  260970 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:45:03.231038  260970 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:45:03.231161  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:45:03.241473  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:45:03.265403  260970 start.go:303] post-start completed in 145.198828ms
	I0817 22:45:03.265431  260970 fix.go:56] fixHost completed within 19.983069334s
	I0817 22:45:03.265454  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.268451  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.268986  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.269019  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.269222  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.269467  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.269646  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.269804  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.270037  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:03.270468  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:03.270481  260970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:45:03.406949  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692312303.352963156
	
	I0817 22:45:03.406979  260970 fix.go:206] guest clock: 1692312303.352963156
	I0817 22:45:03.406989  260970 fix.go:219] Guest: 2023-08-17 22:45:03.352963156 +0000 UTC Remote: 2023-08-17 22:45:03.265434625 +0000 UTC m=+20.138041324 (delta=87.528531ms)
	I0817 22:45:03.407017  260970 fix.go:190] guest clock delta is within tolerance: 87.528531ms
	I0817 22:45:03.407023  260970 start.go:83] releasing machines lock for "newest-cni-249978", held for 20.124673064s
	I0817 22:45:03.407048  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.407372  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:03.410189  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.410636  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.410673  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.410811  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.411303  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.411493  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.411584  260970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:45:03.411621  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.411730  260970 ssh_runner.go:195] Run: cat /version.json
	I0817 22:45:03.411760  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.414603  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.414659  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.414946  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.414991  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.415021  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.415039  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.415119  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.415221  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.415308  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.415378  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.415460  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.415525  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.415585  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:03.415624  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:03.507885  260970 ssh_runner.go:195] Run: systemctl --version
	I0817 22:45:03.543641  260970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:45:03.694020  260970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:45:03.700380  260970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:45:03.700479  260970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:45:03.716601  260970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
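	The find command above side-lines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix (the %!p(MISSING) token is the report's rendering of what is presumably a find -printf path verb). A rough, hedged equivalent of that step:

	    # rename bridge/podman CNI configs that are not already disabled
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	         \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	         -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

	This matches the result line that follows: 87-podman-bridge.conflist is the config that gets disabled here.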
	I0817 22:45:03.716627  260970 start.go:466] detecting cgroup driver to use...
	I0817 22:45:03.716691  260970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:45:03.732412  260970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:45:03.746856  260970 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:45:03.746928  260970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:45:03.762156  260970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:45:03.776736  260970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:45:03.894870  260970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:45:04.032577  260970 docker.go:212] disabling docker service ...
	I0817 22:45:04.032695  260970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:45:04.047824  260970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:45:04.060861  260970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:45:04.194318  260970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:45:04.315524  260970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:45:04.329671  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:45:04.347620  260970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:45:04.347706  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.358932  260970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:45:04.359029  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.371597  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.386184  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.397930  260970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:45:04.409267  260970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:45:04.419017  260970 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:45:04.419077  260970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:45:04.435273  260970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:45:04.445782  260970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:45:04.581505  260970 ssh_runner.go:195] Run: sudo systemctl restart crio
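	Taken together, the commands from "tee /etc/crictl.yaml" down to "systemctl restart crio" point crictl at the CRI-O socket, pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and make sure bridge netfilter and IPv4 forwarding are available before the runtime is restarted. A condensed sketch of the same sequence (paths and values taken from the log):

	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	    sudo systemctl daemon-reload && sudo systemctl restart crio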
	I0817 22:45:04.767501  260970 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:45:04.767594  260970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:45:04.772889  260970 start.go:534] Will wait 60s for crictl version
	I0817 22:45:04.772965  260970 ssh_runner.go:195] Run: which crictl
	I0817 22:45:04.776950  260970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:45:04.814512  260970 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:45:04.814620  260970 ssh_runner.go:195] Run: crio --version
	I0817 22:45:04.875735  260970 ssh_runner.go:195] Run: crio --version
	I0817 22:45:04.931874  260970 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:45:04.933439  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:04.936536  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:04.936945  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:04.937009  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:04.937258  260970 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:45:04.941737  260970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
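	The bash one-liner above refreshes the host.minikube.internal entry: it drops any stale line from /etc/hosts, appends the gateway IP, and copies the result back through a temp file so the rewrite lands in a single sudo cp. A sketch of the same pattern with the IP from the log:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      echo $'192.168.72.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	    getent hosts host.minikube.internal   # should now resolve to 192.168.72.1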
	I0817 22:45:04.957443  260970 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0817 22:45:04.959390  260970 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:45:04.959469  260970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:45:04.993033  260970 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:45:04.993115  260970 ssh_runner.go:195] Run: which lz4
	I0817 22:45:04.997349  260970 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:45:05.002002  260970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:45:05.002048  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457054966 bytes)
	I0817 22:45:06.922685  260970 crio.go:444] Took 1.925387 seconds to copy over tarball
	I0817 22:45:06.922760  260970 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:45:09.931655  260970 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.008864889s)
	I0817 22:45:09.931691  260970 crio.go:451] Took 3.008980 seconds to extract the tarball
	I0817 22:45:09.931702  260970 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:45:09.973250  260970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:45:10.028789  260970 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:45:10.028814  260970 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:45:10.028938  260970 ssh_runner.go:195] Run: crio config
	I0817 22:45:10.098364  260970 cni.go:84] Creating CNI manager for ""
	I0817 22:45:10.098390  260970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:45:10.098413  260970 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0817 22:45:10.098437  260970 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.79 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-249978 NodeName:newest-cni-249978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.72.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:45:10.098621  260970 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-249978"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:45:10.098702  260970 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-249978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:45:10.098771  260970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:45:10.108369  260970 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:45:10.108445  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:45:10.117854  260970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I0817 22:45:10.135972  260970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:45:10.154210  260970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0817 22:45:10.172827  260970 ssh_runner.go:195] Run: grep 192.168.72.79	control-plane.minikube.internal$ /etc/hosts
	I0817 22:45:10.177124  260970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.79	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:45:10.191426  260970 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978 for IP: 192.168.72.79
	I0817 22:45:10.191473  260970 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:45:10.191699  260970 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:45:10.191753  260970 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:45:10.191841  260970 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/client.key
	I0817 22:45:10.191906  260970 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key.7538f06f
	I0817 22:45:10.191942  260970 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key
	I0817 22:45:10.192042  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:45:10.192069  260970 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:45:10.192081  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:45:10.192105  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:45:10.192128  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:45:10.192152  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:45:10.192190  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:45:10.192834  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:45:10.219931  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:45:10.246494  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:45:10.272035  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:45:10.296955  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:45:10.322450  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:45:10.349273  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:45:10.378919  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:45:10.407142  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:45:10.435548  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:45:10.465248  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:45:10.493131  260970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:45:10.512756  260970 ssh_runner.go:195] Run: openssl version
	I0817 22:45:10.518739  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:45:10.530250  260970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:45:10.535751  260970 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:45:10.535833  260970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:45:10.542007  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:45:10.553194  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:45:10.563564  260970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:45:10.568930  260970 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:45:10.569000  260970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:45:10.575496  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:45:10.587239  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:45:10.598503  260970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:45:10.604091  260970 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:45:10.604154  260970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:45:10.610901  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:45:10.622127  260970 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:45:10.627894  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:45:10.634564  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:45:10.641219  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:45:10.647845  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:45:10.654490  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:45:10.661547  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
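	Each of the six openssl calls above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; presumably this is how the restart path decides that the existing control-plane certs can be reused. The same check as a loop over the certs the log inspects:

	    # flag any control-plane cert that expires within 24h
	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	             etcd/server etcd/healthcheck-client etcd/peer; do
	      openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
	        || echo "$c.crt expires within 24h"
	    done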
	I0817 22:45:10.668253  260970 kubeadm.go:404] StartCluster: {Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249
978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedul
edStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:45:10.668454  260970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:45:10.668523  260970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:45:10.705753  260970 cri.go:89] found id: ""
	I0817 22:45:10.705823  260970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:45:10.715957  260970 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:45:10.715986  260970 kubeadm.go:636] restartCluster start
	I0817 22:45:10.716050  260970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:45:10.726339  260970 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:10.727270  260970 kubeconfig.go:135] verify returned: extract IP: "newest-cni-249978" does not appear in /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:45:10.727755  260970 kubeconfig.go:146] "newest-cni-249978" context is missing from /home/jenkins/minikube-integration/16865-203458/kubeconfig - will repair!
	I0817 22:45:10.728512  260970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:45:10.824646  260970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:45:10.834194  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:10.834267  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:10.846389  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:10.846417  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:10.846479  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:10.858271  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:11.359025  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:11.359131  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:11.372048  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:11.858668  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:11.858773  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:11.871223  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:12.358700  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:12.358781  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:12.370531  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:12.859124  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:12.859235  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:12.873849  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:24:26 UTC, ends at Thu 2023-08-17 22:45:13 UTC. --
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.494821396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4dd16007-2acb-4327-b28c-638d77fa4927 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.495162398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4dd16007-2acb-4327-b28c-638d77fa4927 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.536410978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5abf5f40-0be4-4e54-b108-497ad0c50ed8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.536481729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5abf5f40-0be4-4e54-b108-497ad0c50ed8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.536662701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5abf5f40-0be4-4e54-b108-497ad0c50ed8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.579244608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0dd36dd7-7749-4e17-b2f3-5a7cb363e384 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.579310918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0dd36dd7-7749-4e17-b2f3-5a7cb363e384 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.579518878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0dd36dd7-7749-4e17-b2f3-5a7cb363e384 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.618356805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3676df88-48cd-4d81-8a80-7fcf4f835509 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.618420803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3676df88-48cd-4d81-8a80-7fcf4f835509 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.618592588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3676df88-48cd-4d81-8a80-7fcf4f835509 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.654035463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8e307d3-c839-4ba2-a215-c3c2ac88ab50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.654157123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8e307d3-c839-4ba2-a215-c3c2ac88ab50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.654365132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8e307d3-c839-4ba2-a215-c3c2ac88ab50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.694129914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28603d6e-6331-4ba7-8d6e-33ce920b2328 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.694204345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28603d6e-6331-4ba7-8d6e-33ce920b2328 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.694361833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28603d6e-6331-4ba7-8d6e-33ce920b2328 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.728519723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5d50d969-a786-4f6f-99cc-2fb8fc283245 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.728584941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5d50d969-a786-4f6f-99cc-2fb8fc283245 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.728774098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5d50d969-a786-4f6f-99cc-2fb8fc283245 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.750465181Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=14b2f94f-d578-4708-a33f-7f81b07dc4ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.750677596Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:41902ac4ca2188ad7c17f37c77cc0fa8edca1d5988ea01cfaf172da6caa95299,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-9zstm,Uid:a881915b-d7e9-431f-8666-d225a4720a54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311415155281983,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-9zstm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a881915b-d7e9-431f-8666-d225a4720a54,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:30:14.817295023Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:43cb4a9a-10c6-43f7-8d58-7348e2510947,Name
space:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311415014355747,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volume
s\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T22:30:14.672331557Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&PodSandboxMetadata{Name:kube-proxy-2f6jz,Uid:c82a9796-e23b-4823-a3f2-d180b9aa866f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311412616039691,Labels:map[string]string{controller-revision-hash: 86cc8bcbf7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:30:11.968763549Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-ghvnx,Uid:6
4d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311412468992743,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:30:12.127321528Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-437183,Uid:da42d2c071cc2dcd5796d4fd0d4f53ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311389983080990,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f
53ff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: da42d2c071cc2dcd5796d4fd0d4f53ff,kubernetes.io/config.seen: 2023-08-17T22:29:49.441394004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-437183,Uid:a47cd3fe64cfb9fe8eca6552afd070ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311389973048919,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a47cd3fe64cfb9fe8eca6552afd070ae,kubernetes.io/config.seen: 2023-08-17T22:29:49.441393196Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9199bfa59e5a8194f74d
713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-437183,Uid:dae663a4d1e550c567189ea849fd32b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311389953437648,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.186:8443,kubernetes.io/config.hash: dae663a4d1e550c567189ea849fd32b5,kubernetes.io/config.seen: 2023-08-17T22:29:49.441391954Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-437183,Uid:770caf08f9100acb248b5bf9c4c26972,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16
92311389905105959,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.186:2379,kubernetes.io/config.hash: 770caf08f9100acb248b5bf9c4c26972,kubernetes.io/config.seen: 2023-08-17T22:29:49.441387737Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=14b2f94f-d578-4708-a33f-7f81b07dc4ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.751358304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e5ab8feb-2c26-47f3-a5a3-040be1c45304 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.751407557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e5ab8feb-2c26-47f3-a5a3-040be1c45304 name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:45:13 embed-certs-437183 crio[726]: time="2023-08-17 22:45:13.751610707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577,PodSandboxId:a883fa663ed47cfa02074bfa0c00383d853f9985ebae0fea513507134c21db8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311416690818089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cb4a9a-10c6-43f7-8d58-7348e2510947,},Annotations:map[string]string{io.kubernetes.container.hash: 6844c1ed,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c,PodSandboxId:d1a15a97e0abc6db4041fb09640bd45d33194113bbf04e481c83e70745982bc2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311416527174060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f6jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a9796-e23b-4823-a3f2-d180b9aa866f,},Annotations:map[string]string{io.kubernetes.container.hash: c22f1c12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48,PodSandboxId:6fff8c5415a66fd22d83decbb404e7de97820defe7b3d5e86dc6ec31bd7181de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311415485813058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-ghvnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b,},Annotations:map[string]string{io.kubernetes.container.hash: f6773c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074,PodSandboxId:1ab6dfcb94f4b0bd6e700d0b00bb99cb56745fda6d7971f50b2ee47dec59142c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311391167088978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770caf08f9100acb248b5bf9c4c26972,},Ann
otations:map[string]string{io.kubernetes.container.hash: e9f3f82a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e,PodSandboxId:897ac10edd52d4c86ff64e3ec06baeefe06d0583675a8e922d141b07ec51485e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311391031149743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da42d2c071cc2dcd5796d4fd0d4f53ff,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19,PodSandboxId:9199bfa59e5a8194f74d713550d0b7dd135f0e20e61c833a886d1ed012cb95d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311390551673206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae663a4d1e550c567189ea849fd32b5,},Annotations:map[string]
string{io.kubernetes.container.hash: 57ca1ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918,PodSandboxId:507891bfee4925db2e233e99a955d81dfac5280d15a940586aba435b0eb75258,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311390425064736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-437183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47cd3fe64cfb9fe8eca6552afd070ae
,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e5ab8feb-2c26-47f3-a5a3-040be1c45304 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	5a080e9202ff1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   a883fa663ed47
	70009069c37b7       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   14 minutes ago      Running             kube-proxy                0                   d1a15a97e0abc
	c78fe32267075       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   6fff8c5415a66
	1adcff7bb1e0f       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   15 minutes ago      Running             etcd                      2                   1ab6dfcb94f4b
	22c0de40f713b       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   15 minutes ago      Running             kube-scheduler            2                   897ac10edd52d
	7b97bbae5144d       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   15 minutes ago      Running             kube-apiserver            2                   9199bfa59e5a8
	5c17e7df1c775       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   15 minutes ago      Running             kube-controller-manager   2                   507891bfee492
	
	* 
	* ==> coredns [c78fe32267075aa3d80f7ebc30ff3c390e005665f3c4b3570f1daf3b7e50ce48] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-437183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-437183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=embed-certs-437183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:29:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-437183
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:45:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:40:32 +0000   Thu, 17 Aug 2023 22:29:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:40:32 +0000   Thu, 17 Aug 2023 22:29:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:40:32 +0000   Thu, 17 Aug 2023 22:29:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:40:32 +0000   Thu, 17 Aug 2023 22:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    embed-certs-437183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e92f5f8fc5a04000a28188bccb075951
	  System UUID:                e92f5f8f-c5a0-4000-a281-88bccb075951
	  Boot ID:                    f4abf1b1-764f-4721-bf35-e191b40359b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-ghvnx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-437183                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-437183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-437183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2f6jz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-437183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-74d5c6b9c-9zstm                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-437183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-437183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-437183 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node embed-certs-437183 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node embed-certs-437183 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-437183 event: Registered Node embed-certs-437183 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072947] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.403841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.589661] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154794] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.493867] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.453439] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.136518] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.171727] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.129850] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.268690] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.640819] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[Aug17 22:25] hrtimer: interrupt took 6556907 ns
	[  +6.601452] kauditd_printk_skb: 19 callbacks suppressed
	[Aug17 22:29] kauditd_printk_skb: 4 callbacks suppressed
	[ +29.401071] systemd-fstab-generator[3564]: Ignoring "noauto" for root device
	[  +9.841115] systemd-fstab-generator[3892]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [1adcff7bb1e0f3f61c8975a90c07cab9ae4c21586b5c66cb87f792c7acd3a074] <==
	* {"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 2"}
	{"level":"info","ts":"2023-08-17T22:29:53.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2023-08-17T22:29:53.242Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.244Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:embed-certs-437183 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T22:29:53.244Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:29:53.246Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T22:29:53.246Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.186:2379"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T22:29:53.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T22:29:53.256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-17T22:39:53.551Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2023-08-17T22:39:53.562Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":682,"took":"6.353977ms","hash":2616085045}
	{"level":"info","ts":"2023-08-17T22:39:53.562Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2616085045,"revision":682,"compact-revision":-1}
	{"level":"warn","ts":"2023-08-17T22:44:00.738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.224729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:44:00.739Z","caller":"traceutil/trace.go:171","msg":"trace[605343495] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1126; }","duration":"216.820585ms","start":"2023-08-17T22:44:00.522Z","end":"2023-08-17T22:44:00.739Z","steps":["trace[605343495] 'range keys from in-memory index tree'  (duration: 216.101209ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:44:01.637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.987328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:44:01.637Z","caller":"traceutil/trace.go:171","msg":"trace[26189902] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1127; }","duration":"115.122418ms","start":"2023-08-17T22:44:01.522Z","end":"2023-08-17T22:44:01.637Z","steps":["trace[26189902] 'range keys from in-memory index tree'  (duration: 114.759722ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T22:44:53.566Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2023-08-17T22:44:53.568Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":925,"took":"1.437105ms","hash":2200639315}
	{"level":"info","ts":"2023-08-17T22:44:53.569Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2200639315,"revision":925,"compact-revision":682}
	{"level":"info","ts":"2023-08-17T22:45:11.454Z","caller":"traceutil/trace.go:171","msg":"trace[574638169] transaction","detail":"{read_only:false; response_revision:1183; number_of_response:1; }","duration":"497.006909ms","start":"2023-08-17T22:45:10.957Z","end":"2023-08-17T22:45:11.454Z","steps":["trace[574638169] 'process raft request'  (duration: 496.900878ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:45:11.455Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:45:10.957Z","time spent":"497.203993ms","remote":"127.0.0.1:60088","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1182 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  22:45:14 up 20 min,  0 users,  load average: 0.14, 0.16, 0.21
	Linux embed-certs-437183 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7b97bbae5144d67635fe29dee51cc8e1d2b02c060d24d7b72ce95dbc711d1a19] <==
	* I0817 22:42:55.225481       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:42:56.359545       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:42:56.359708       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:42:56.359742       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:42:56.362095       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:42:56.362227       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:42:56.362246       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:43:55.225500       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:43:55.225734       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:44:55.225080       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:44:55.225181       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:44:55.363307       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.245.1:443: connect: connection refused
	I0817 22:44:55.363425       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:44:56.364099       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:44:56.364209       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:44:56.364229       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0817 22:44:56.364365       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:44:56.364447       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:44:56.365797       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:45:11.456653       1 trace.go:219] Trace[131909104]: "Update" accept:application/json, */*,audit-id:c814bbb6-c472-4c9b-b4ce-fea14abb77dc,client:192.168.39.186,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Aug-2023 22:45:10.955) (total time: 501ms):
	Trace[131909104]: ["GuaranteedUpdate etcd3" audit-id:c814bbb6-c472-4c9b-b4ce-fea14abb77dc,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 500ms (22:45:10.955)
	Trace[131909104]:  ---"Txn call completed" 499ms (22:45:11.456)]
	Trace[131909104]: [501.386212ms] [501.386212ms] END
	
	* 
	* ==> kube-controller-manager [5c17e7df1c7750617119c3ebaf1cb918edf38bc5cb61fa040db4d54062977918] <==
	* W0817 22:39:11.541408       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:39:41.049524       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:39:41.550039       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:40:11.057518       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:40:11.560741       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:40:41.063214       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:40:41.570339       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:41:11.071320       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:41:11.580558       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:41:41.076796       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:41:41.591457       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:42:11.084269       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:42:11.607162       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:42:41.090211       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:42:41.617899       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:43:11.097806       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:43:11.630144       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:43:41.105849       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:43:41.644151       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:44:11.114701       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:44:11.654463       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:44:41.120748       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:44:41.665660       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:45:11.127786       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:45:11.674875       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [70009069c37b77881d13f37fe4043535c1cfb24986365575979297af1611f93c] <==
	* I0817 22:30:17.088623       1 node.go:141] Successfully retrieved node IP: 192.168.39.186
	I0817 22:30:17.088757       1 server_others.go:110] "Detected node IP" address="192.168.39.186"
	I0817 22:30:17.088791       1 server_others.go:554] "Using iptables proxy"
	I0817 22:30:17.134247       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 22:30:17.134313       1 server_others.go:192] "Using iptables Proxier"
	I0817 22:30:17.134367       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 22:30:17.135077       1 server.go:658] "Version info" version="v1.27.4"
	I0817 22:30:17.135116       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:30:17.136274       1 config.go:188] "Starting service config controller"
	I0817 22:30:17.136348       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 22:30:17.136400       1 config.go:97] "Starting endpoint slice config controller"
	I0817 22:30:17.136434       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 22:30:17.137447       1 config.go:315] "Starting node config controller"
	I0817 22:30:17.137482       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 22:30:17.236834       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 22:30:17.236837       1 shared_informer.go:318] Caches are synced for service config
	I0817 22:30:17.237607       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [22c0de40f713bc2c5e29212a325e710e4017326b44b96ad763f211128e8b492e] <==
	* W0817 22:29:56.197194       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 22:29:56.197253       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 22:29:56.269111       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:29:56.269200       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 22:29:56.512582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 22:29:56.512637       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 22:29:56.516621       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:29:56.516649       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 22:29:56.574311       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:29:56.574412       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 22:29:56.615114       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:29:56.615221       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0817 22:29:56.623789       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:29:56.623875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 22:29:56.623889       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:29:56.624000       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 22:29:56.689355       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:29:56.689452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 22:29:56.707896       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:29:56.708006       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 22:29:56.712322       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:29:56.712386       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 22:29:56.803734       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 22:29:56.803792       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 22:29:59.355271       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:24:26 UTC, ends at Thu 2023-08-17 22:45:14 UTC. --
	Aug 17 22:42:55 embed-certs-437183 kubelet[3899]: E0817 22:42:55.420675    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:42:59 embed-certs-437183 kubelet[3899]: E0817 22:42:59.505779    3899 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:42:59 embed-certs-437183 kubelet[3899]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:42:59 embed-certs-437183 kubelet[3899]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:42:59 embed-certs-437183 kubelet[3899]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:43:07 embed-certs-437183 kubelet[3899]: E0817 22:43:07.420030    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:43:22 embed-certs-437183 kubelet[3899]: E0817 22:43:22.419784    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:43:34 embed-certs-437183 kubelet[3899]: E0817 22:43:34.420188    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:43:48 embed-certs-437183 kubelet[3899]: E0817 22:43:48.420058    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:43:59 embed-certs-437183 kubelet[3899]: E0817 22:43:59.507372    3899 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:43:59 embed-certs-437183 kubelet[3899]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:43:59 embed-certs-437183 kubelet[3899]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:43:59 embed-certs-437183 kubelet[3899]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:44:00 embed-certs-437183 kubelet[3899]: E0817 22:44:00.420038    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:44:14 embed-certs-437183 kubelet[3899]: E0817 22:44:14.419630    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:44:26 embed-certs-437183 kubelet[3899]: E0817 22:44:26.420127    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:44:38 embed-certs-437183 kubelet[3899]: E0817 22:44:38.420249    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:44:49 embed-certs-437183 kubelet[3899]: E0817 22:44:49.420658    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:44:59 embed-certs-437183 kubelet[3899]: E0817 22:44:59.505515    3899 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:44:59 embed-certs-437183 kubelet[3899]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:44:59 embed-certs-437183 kubelet[3899]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:44:59 embed-certs-437183 kubelet[3899]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:44:59 embed-certs-437183 kubelet[3899]: E0817 22:44:59.522127    3899 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Aug 17 22:45:01 embed-certs-437183 kubelet[3899]: E0817 22:45:01.420991    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	Aug 17 22:45:13 embed-certs-437183 kubelet[3899]: E0817 22:45:13.420514    3899 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-9zstm" podUID=a881915b-d7e9-431f-8666-d225a4720a54
	
	* 
	* ==> storage-provisioner [5a080e9202ff1754ed4b0d31c0edf8127fa7758a87dfe991abdd03da554ff577] <==
	* I0817 22:30:16.993753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:30:17.012134       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:30:17.012256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:30:17.036047       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:30:17.036595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-437183_7b43298b-a344-4382-9361-149305b30baa!
	I0817 22:30:17.041028       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9b2b7ab-b416-4200-93c3-29398470d58a", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-437183_7b43298b-a344-4382-9361-149305b30baa became leader
	I0817 22:30:17.141279       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-437183_7b43298b-a344-4382-9361-149305b30baa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-437183 -n embed-certs-437183
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-437183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-9zstm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-437183 describe pod metrics-server-74d5c6b9c-9zstm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-437183 describe pod metrics-server-74d5c6b9c-9zstm: exit status 1 (72.12047ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-9zstm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-437183 describe pod metrics-server-74d5c6b9c-9zstm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (352.10s)
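Note: the kubelet log above suggests why the metrics-server pod never became ready: the test deliberately points the MetricsServer image at the unreachable registry fake.domain (see the `--registries=MetricsServer=fake.domain` flag in the Audit table further down), so the image pull stays in ImagePullBackOff until the profile is deleted. As a rough, hypothetical sketch of how this could be confirmed by hand against a still-running profile (these commands are assumptions for illustration, not part of the test harness; the pod had already been removed by the time of the post-mortem above):

	# Show the waiting reason and recent events for the metrics-server pod
	kubectl --context embed-certs-437183 -n kube-system get pod metrics-server-74d5c6b9c-9zstm \
	  -o jsonpath='{.status.containerStatuses[*].state.waiting.reason}'
	kubectl --context embed-certs-437183 -n kube-system get events \
	  --field-selector involvedObject.name=metrics-server-74d5c6b9c-9zstm
	# Attempt the pull directly on the node; it is expected to fail since fake.domain does not resolve
	out/minikube-linux-amd64 -p embed-certs-437183 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"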

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (368.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0817 22:40:20.385001  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:40:31.664916  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 22:40:50.284223  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:40:55.683307  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:45:52.045891345 +0000 UTC m=+5725.686659077
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321287 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.816µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-321287 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
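Note: the check at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference the custom registry.k8s.io/echoserver:1.4 image, but the describe call above failed immediately because the test context had already expired, so no deployment info was captured. A hypothetical way to gather the same information manually, outside the expired test context (commands assumed for illustration, not taken from the harness):

	# List the container images configured on the kubernetes-dashboard deployments
	kubectl --context default-k8s-diff-port-321287 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'
	# Or fall back to a plain describe of the deployment the test inspects
	kubectl --context default-k8s-diff-port-321287 -n kubernetes-dashboard describe deploy dashboard-metrics-scraper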
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-321287 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-321287 logs -n 25: (1.117658445s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:43 UTC | 17 Aug 23 22:43 UTC |
	| start   | -p newest-cni-249978 --memory=2200 --alsologtostderr   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:43 UTC | 17 Aug 23 22:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	| addons  | enable metrics-server -p newest-cni-249978             | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-249978                                   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-249978                  | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-249978 --memory=2200 --alsologtostderr   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:44 UTC | 17 Aug 23 22:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:45 UTC | 17 Aug 23 22:45 UTC |
	| ssh     | -p newest-cni-249978 sudo                              | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:45 UTC | 17 Aug 23 22:45 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-249978                                   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:45 UTC | 17 Aug 23 22:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-249978                                   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:45 UTC | 17 Aug 23 22:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-249978                                   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:45 UTC | 17 Aug 23 22:45 UTC |
	| delete  | -p newest-cni-249978                                   | newest-cni-249978            | jenkins | v1.31.2 | 17 Aug 23 22:45 UTC | 17 Aug 23 22:45 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:44:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:44:43.163827  260970 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:44:43.163980  260970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:44:43.163989  260970 out.go:309] Setting ErrFile to fd 2...
	I0817 22:44:43.163994  260970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:44:43.164196  260970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:44:43.164805  260970 out.go:303] Setting JSON to false
	I0817 22:44:43.165716  260970 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26808,"bootTime":1692285475,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:44:43.165779  260970 start.go:138] virtualization: kvm guest
	I0817 22:44:43.169063  260970 out.go:177] * [newest-cni-249978] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:44:43.171260  260970 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:44:43.171292  260970 notify.go:220] Checking for updates...
	I0817 22:44:43.174577  260970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:44:43.176301  260970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:44:43.179543  260970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:44:43.181158  260970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:44:43.182542  260970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:44:43.184414  260970 config.go:182] Loaded profile config "newest-cni-249978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:44:43.184851  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:44:43.184918  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:44:43.200017  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0817 22:44:43.200457  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:44:43.201092  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:44:43.201122  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:44:43.201494  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:44:43.201684  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:44:43.201984  260970 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:44:43.202330  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:44:43.202372  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:44:43.217738  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0817 22:44:43.218257  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:44:43.218854  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:44:43.218876  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:44:43.219281  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:44:43.219503  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:44:43.258209  260970 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:44:43.259733  260970 start.go:298] selected driver: kvm2
	I0817 22:44:43.259745  260970 start.go:902] validating driver "kvm2" against &{Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterN
ame:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTi
meout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:44:43.259895  260970 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:44:43.260613  260970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:44:43.260704  260970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:44:43.276946  260970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:44:43.277339  260970 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 22:44:43.277380  260970 cni.go:84] Creating CNI manager for ""
	I0817 22:44:43.277396  260970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:44:43.277407  260970 start_flags.go:319] config:
	{Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:44:43.277558  260970 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:44:43.279915  260970 out.go:177] * Starting control plane node newest-cni-249978 in cluster newest-cni-249978
	I0817 22:44:43.281775  260970 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:44:43.281829  260970 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 22:44:43.281839  260970 cache.go:57] Caching tarball of preloaded images
	I0817 22:44:43.281966  260970 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:44:43.281978  260970 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0817 22:44:43.282119  260970 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json ...
	I0817 22:44:43.282297  260970 start.go:365] acquiring machines lock for newest-cni-249978: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:44:43.282340  260970 start.go:369] acquired machines lock for "newest-cni-249978" in 23.186µs
	I0817 22:44:43.282354  260970 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:44:43.282361  260970 fix.go:54] fixHost starting: 
	I0817 22:44:43.282644  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:44:43.282685  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:44:43.297320  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I0817 22:44:43.297795  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:44:43.298373  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:44:43.298403  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:44:43.298710  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:44:43.298925  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:44:43.299058  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:44:43.300552  260970 fix.go:102] recreateIfNeeded on newest-cni-249978: state=Stopped err=<nil>
	I0817 22:44:43.300581  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	W0817 22:44:43.300770  260970 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:44:43.302893  260970 out.go:177] * Restarting existing kvm2 VM for "newest-cni-249978" ...
	I0817 22:44:43.304291  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Start
	I0817 22:44:43.304476  260970 main.go:141] libmachine: (newest-cni-249978) Ensuring networks are active...
	I0817 22:44:43.305248  260970 main.go:141] libmachine: (newest-cni-249978) Ensuring network default is active
	I0817 22:44:43.305633  260970 main.go:141] libmachine: (newest-cni-249978) Ensuring network mk-newest-cni-249978 is active
	I0817 22:44:43.306043  260970 main.go:141] libmachine: (newest-cni-249978) Getting domain xml...
	I0817 22:44:43.306937  260970 main.go:141] libmachine: (newest-cni-249978) Creating domain...
	I0817 22:44:44.615775  260970 main.go:141] libmachine: (newest-cni-249978) Waiting to get IP...
	I0817 22:44:44.616924  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:44.617345  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:44.617427  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:44.617345  261015 retry.go:31] will retry after 203.552259ms: waiting for machine to come up
	I0817 22:44:44.822966  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:44.823420  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:44.823454  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:44.823355  261015 retry.go:31] will retry after 266.281164ms: waiting for machine to come up
	I0817 22:44:45.090943  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:45.091497  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:45.091530  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:45.091438  261015 retry.go:31] will retry after 432.201323ms: waiting for machine to come up
	I0817 22:44:45.525215  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:45.525824  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:45.525860  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:45.525766  261015 retry.go:31] will retry after 461.389999ms: waiting for machine to come up
	I0817 22:44:45.988602  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:45.989192  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:45.989220  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:45.989093  261015 retry.go:31] will retry after 478.434585ms: waiting for machine to come up
	I0817 22:44:46.468774  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:46.469265  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:46.469293  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:46.469227  261015 retry.go:31] will retry after 911.517038ms: waiting for machine to come up
	I0817 22:44:47.382433  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:47.382991  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:47.383014  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:47.382943  261015 retry.go:31] will retry after 1.027658145s: waiting for machine to come up
	I0817 22:44:48.412170  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:48.412645  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:48.412683  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:48.412577  261015 retry.go:31] will retry after 1.352762707s: waiting for machine to come up
	I0817 22:44:49.767019  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:49.767563  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:49.767591  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:49.767490  261015 retry.go:31] will retry after 1.303613536s: waiting for machine to come up
	I0817 22:44:51.073084  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:51.073759  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:51.073824  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:51.073632  261015 retry.go:31] will retry after 1.855244581s: waiting for machine to come up
	I0817 22:44:52.931179  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:52.931747  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:52.931780  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:52.931693  261015 retry.go:31] will retry after 2.689274347s: waiting for machine to come up
	I0817 22:44:55.623204  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:55.623756  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:55.623786  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:55.623696  261015 retry.go:31] will retry after 2.411014909s: waiting for machine to come up
	I0817 22:44:58.036538  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:44:58.036942  260970 main.go:141] libmachine: (newest-cni-249978) DBG | unable to find current IP address of domain newest-cni-249978 in network mk-newest-cni-249978
	I0817 22:44:58.036969  260970 main.go:141] libmachine: (newest-cni-249978) DBG | I0817 22:44:58.036897  261015 retry.go:31] will retry after 4.036401915s: waiting for machine to come up
	I0817 22:45:02.077774  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.078267  260970 main.go:141] libmachine: (newest-cni-249978) Found IP for machine: 192.168.72.79
	I0817 22:45:02.078285  260970 main.go:141] libmachine: (newest-cni-249978) Reserving static IP address...
	I0817 22:45:02.078334  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has current primary IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.078796  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "newest-cni-249978", mac: "52:54:00:88:0c:ac", ip: "192.168.72.79"} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.078831  260970 main.go:141] libmachine: (newest-cni-249978) Reserved static IP address: 192.168.72.79
	I0817 22:45:02.078850  260970 main.go:141] libmachine: (newest-cni-249978) DBG | skip adding static IP to network mk-newest-cni-249978 - found existing host DHCP lease matching {name: "newest-cni-249978", mac: "52:54:00:88:0c:ac", ip: "192.168.72.79"}
	I0817 22:45:02.078868  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Getting to WaitForSSH function...
	I0817 22:45:02.078890  260970 main.go:141] libmachine: (newest-cni-249978) Waiting for SSH to be available...
	I0817 22:45:02.080999  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.081341  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.081375  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.081500  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Using SSH client type: external
	I0817 22:45:02.081531  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa (-rw-------)
	I0817 22:45:02.081581  260970 main.go:141] libmachine: (newest-cni-249978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:45:02.081608  260970 main.go:141] libmachine: (newest-cni-249978) DBG | About to run SSH command:
	I0817 22:45:02.081622  260970 main.go:141] libmachine: (newest-cni-249978) DBG | exit 0
	I0817 22:45:02.178301  260970 main.go:141] libmachine: (newest-cni-249978) DBG | SSH cmd err, output: <nil>: 
	I0817 22:45:02.178690  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetConfigRaw
	I0817 22:45:02.179379  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:02.182229  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.182629  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.182674  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.182955  260970 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/config.json ...
	I0817 22:45:02.183160  260970 machine.go:88] provisioning docker machine ...
	I0817 22:45:02.183180  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:02.183423  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:45:02.183602  260970 buildroot.go:166] provisioning hostname "newest-cni-249978"
	I0817 22:45:02.183631  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:45:02.183814  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.186073  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.186481  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.186531  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.186651  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.186960  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.187141  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.187321  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.187507  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:02.187941  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:02.187957  260970 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-249978 && echo "newest-cni-249978" | sudo tee /etc/hostname
	I0817 22:45:02.335697  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-249978
	
	I0817 22:45:02.335736  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.339187  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.339540  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.339569  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.339752  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.339992  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.340214  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.340389  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.340601  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:02.341232  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:02.341264  260970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-249978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-249978/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-249978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:45:02.487457  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:45:02.487487  260970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:45:02.487513  260970 buildroot.go:174] setting up certificates
	I0817 22:45:02.487522  260970 provision.go:83] configureAuth start
	I0817 22:45:02.487531  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetMachineName
	I0817 22:45:02.487871  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:02.490546  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.490939  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.490982  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.491313  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.493660  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.493961  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.493992  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.494169  260970 provision.go:138] copyHostCerts
	I0817 22:45:02.494226  260970 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:45:02.494237  260970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:45:02.494345  260970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:45:02.494525  260970 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:45:02.494539  260970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:45:02.494594  260970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:45:02.494697  260970 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:45:02.494720  260970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:45:02.494767  260970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:45:02.494876  260970 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.newest-cni-249978 san=[192.168.72.79 192.168.72.79 localhost 127.0.0.1 minikube newest-cni-249978]
	I0817 22:45:02.594463  260970 provision.go:172] copyRemoteCerts
	I0817 22:45:02.594522  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:45:02.594550  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.597587  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.597902  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.597938  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.598110  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.598318  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.598517  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.598643  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:02.696027  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:45:02.721498  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:45:02.747688  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:45:02.772074  260970 provision.go:86] duration metric: configureAuth took 284.534984ms
	I0817 22:45:02.772114  260970 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:45:02.772367  260970 config.go:182] Loaded profile config "newest-cni-249978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:45:02.772458  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:02.774997  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.775304  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:02.775354  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:02.775496  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:02.775685  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.775862  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:02.776019  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:02.776169  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:02.776757  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:02.776781  260970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:45:03.120134  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:45:03.120171  260970 machine.go:91] provisioned docker machine in 936.996396ms
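
The SSH command above writes CRIO_MINIKUBE_OPTIONS (here just --insecure-registry 10.96.0.0/12) into /etc/sysconfig/crio.minikube and restarts CRI-O so the flag takes effect. A quick manual verification on the guest, assuming the same paths, would be:

    sudo cat /etc/sysconfig/crio.minikube   # should contain the CRIO_MINIKUBE_OPTIONS line
    systemctl is-active crio                # "active" once the restart has completed
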
	I0817 22:45:03.120187  260970 start.go:300] post-start starting for "newest-cni-249978" (driver="kvm2")
	I0817 22:45:03.120232  260970 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:45:03.120275  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.120635  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:45:03.120676  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.123936  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.124434  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.124468  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.124641  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.124859  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.125059  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.125245  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:03.225706  260970 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:45:03.230818  260970 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:45:03.230849  260970 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:45:03.230945  260970 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:45:03.231038  260970 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:45:03.231161  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:45:03.241473  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:45:03.265403  260970 start.go:303] post-start completed in 145.198828ms
	I0817 22:45:03.265431  260970 fix.go:56] fixHost completed within 19.983069334s
	I0817 22:45:03.265454  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.268451  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.268986  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.269019  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.269222  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.269467  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.269646  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.269804  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.270037  260970 main.go:141] libmachine: Using SSH client type: native
	I0817 22:45:03.270468  260970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I0817 22:45:03.270481  260970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:45:03.406949  260970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692312303.352963156
	
	I0817 22:45:03.406979  260970 fix.go:206] guest clock: 1692312303.352963156
	I0817 22:45:03.406989  260970 fix.go:219] Guest: 2023-08-17 22:45:03.352963156 +0000 UTC Remote: 2023-08-17 22:45:03.265434625 +0000 UTC m=+20.138041324 (delta=87.528531ms)
	I0817 22:45:03.407017  260970 fix.go:190] guest clock delta is within tolerance: 87.528531ms
	I0817 22:45:03.407023  260970 start.go:83] releasing machines lock for "newest-cni-249978", held for 20.124673064s
	I0817 22:45:03.407048  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.407372  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:03.410189  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.410636  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.410673  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.410811  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.411303  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.411493  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:03.411584  260970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:45:03.411621  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.411730  260970 ssh_runner.go:195] Run: cat /version.json
	I0817 22:45:03.411760  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:03.414603  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.414659  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.414946  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.414991  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.415021  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:03.415039  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:03.415119  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.415221  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:03.415308  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.415378  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:03.415460  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.415525  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:03.415585  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:03.415624  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:03.507885  260970 ssh_runner.go:195] Run: systemctl --version
	I0817 22:45:03.543641  260970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:45:03.694020  260970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:45:03.700380  260970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:45:03.700479  260970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:45:03.716601  260970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:45:03.716627  260970 start.go:466] detecting cgroup driver to use...
	I0817 22:45:03.716691  260970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:45:03.732412  260970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:45:03.746856  260970 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:45:03.746928  260970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:45:03.762156  260970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:45:03.776736  260970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:45:03.894870  260970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:45:04.032577  260970 docker.go:212] disabling docker service ...
	I0817 22:45:04.032695  260970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:45:04.047824  260970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:45:04.060861  260970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:45:04.194318  260970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:45:04.315524  260970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:45:04.329671  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:45:04.347620  260970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:45:04.347706  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.358932  260970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:45:04.359029  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.371597  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:45:04.386184  260970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
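
The sed edits above adjust the 02-crio.conf drop-in in three ways: pin pause_image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, and add conmon_cgroup = "pod" right after it. Assuming that drop-in path, the effective values can be confirmed with:

    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
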
	I0817 22:45:04.397930  260970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:45:04.409267  260970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:45:04.419017  260970 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:45:04.419077  260970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:45:04.435273  260970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
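
The sysctl probe fails with status 255 simply because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; minikube then loads the module and enables IPv4 forwarding. The equivalent manual sequence, after which both knobs are readable, is roughly:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables     # resolvable once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward             # 1 after the echo above
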
	I0817 22:45:04.445782  260970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:45:04.581505  260970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:45:04.767501  260970 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:45:04.767594  260970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:45:04.772889  260970 start.go:534] Will wait 60s for crictl version
	I0817 22:45:04.772965  260970 ssh_runner.go:195] Run: which crictl
	I0817 22:45:04.776950  260970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:45:04.814512  260970 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:45:04.814620  260970 ssh_runner.go:195] Run: crio --version
	I0817 22:45:04.875735  260970 ssh_runner.go:195] Run: crio --version
	I0817 22:45:04.931874  260970 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:45:04.933439  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetIP
	I0817 22:45:04.936536  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:04.936945  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:04.937009  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:04.937258  260970 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:45:04.941737  260970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:45:04.957443  260970 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0817 22:45:04.959390  260970 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:45:04.959469  260970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:45:04.993033  260970 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:45:04.993115  260970 ssh_runner.go:195] Run: which lz4
	I0817 22:45:04.997349  260970 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:45:05.002002  260970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:45:05.002048  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457054966 bytes)
	I0817 22:45:06.922685  260970 crio.go:444] Took 1.925387 seconds to copy over tarball
	I0817 22:45:06.922760  260970 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:45:09.931655  260970 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.008864889s)
	I0817 22:45:09.931691  260970 crio.go:451] Took 3.008980 seconds to extract the tarball
	I0817 22:45:09.931702  260970 ssh_runner.go:146] rm: /preloaded.tar.lz4
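
For scale: the preload tarball is 457054966 bytes (~436 MiB), so the 1.93 s copy works out to roughly 226 MiB/s over the local SSH connection, and the 3.01 s lz4+tar extraction to roughly 145 MiB/s.
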
	I0817 22:45:09.973250  260970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:45:10.028789  260970 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:45:10.028814  260970 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:45:10.028938  260970 ssh_runner.go:195] Run: crio config
	I0817 22:45:10.098364  260970 cni.go:84] Creating CNI manager for ""
	I0817 22:45:10.098390  260970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:45:10.098413  260970 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0817 22:45:10.098437  260970 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.79 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-249978 NodeName:newest-cni-249978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:45:10.098621  260970 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-249978"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
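
The rendered config above is a single multi-document YAML: InitConfiguration and ClusterConfiguration for kubeadm, a KubeletConfiguration, and a KubeProxyConfiguration, separated by ---. Once it has been written to /var/tmp/minikube/kubeadm.yaml.new (a few lines below), the four documents can be listed with a plain grep, e.g.:

    sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration
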
	
	I0817 22:45:10.098702  260970 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-249978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:45:10.098771  260970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:45:10.108369  260970 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:45:10.108445  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:45:10.117854  260970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I0817 22:45:10.135972  260970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:45:10.154210  260970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
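
At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and the kubeadm config have all been written. Two optional manual checks, assuming the paths from the log: systemd's merged view of the unit, and, on recent kubeadm releases that ship the validate subcommand, a client-side check of the config file:

    systemctl cat kubelet     # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
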
	I0817 22:45:10.172827  260970 ssh_runner.go:195] Run: grep 192.168.72.79	control-plane.minikube.internal$ /etc/hosts
	I0817 22:45:10.177124  260970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.79	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:45:10.191426  260970 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978 for IP: 192.168.72.79
	I0817 22:45:10.191473  260970 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:45:10.191699  260970 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:45:10.191753  260970 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:45:10.191841  260970 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/client.key
	I0817 22:45:10.191906  260970 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key.7538f06f
	I0817 22:45:10.191942  260970 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key
	I0817 22:45:10.192042  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:45:10.192069  260970 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:45:10.192081  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:45:10.192105  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:45:10.192128  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:45:10.192152  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:45:10.192190  260970 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:45:10.192834  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:45:10.219931  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:45:10.246494  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:45:10.272035  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/newest-cni-249978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:45:10.296955  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:45:10.322450  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:45:10.349273  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:45:10.378919  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:45:10.407142  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:45:10.435548  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:45:10.465248  260970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:45:10.493131  260970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:45:10.512756  260970 ssh_runner.go:195] Run: openssl version
	I0817 22:45:10.518739  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:45:10.530250  260970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:45:10.535751  260970 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:45:10.535833  260970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:45:10.542007  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:45:10.553194  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:45:10.563564  260970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:45:10.568930  260970 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:45:10.569000  260970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:45:10.575496  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:45:10.587239  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:45:10.598503  260970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:45:10.604091  260970 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:45:10.604154  260970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:45:10.610901  260970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
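
Each openssl x509 -hash -noout call above computes the subject-name hash that OpenSSL uses for CApath lookups, and the following ln -fs creates the matching <hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0) in /etc/ssl/certs so the certificates are found by anything using the system trust directory. The mapping can be reproduced by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l /etc/ssl/certs/${h}.0    # symlink pointing back at minikubeCA.pem
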
	I0817 22:45:10.622127  260970 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:45:10.627894  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:45:10.634564  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:45:10.641219  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:45:10.647845  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:45:10.654490  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:45:10.661547  260970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
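
The -checkend 86400 probes ask whether each control-plane certificate is still valid for at least 86400 seconds (24 hours): openssl exits 0 if the certificate will not expire within that window and 1 if it will, which is how minikube decides whether the existing certs can be reused. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"   # would trigger regeneration
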
	I0817 22:45:10.668253  260970 kubeadm.go:404] StartCluster: {Name:newest-cni-249978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-249978 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:45:10.668454  260970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:45:10.668523  260970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:45:10.705753  260970 cri.go:89] found id: ""
	I0817 22:45:10.705823  260970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:45:10.715957  260970 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:45:10.715986  260970 kubeadm.go:636] restartCluster start
	I0817 22:45:10.716050  260970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:45:10.726339  260970 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:10.727270  260970 kubeconfig.go:135] verify returned: extract IP: "newest-cni-249978" does not appear in /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:45:10.727755  260970 kubeconfig.go:146] "newest-cni-249978" context is missing from /home/jenkins/minikube-integration/16865-203458/kubeconfig - will repair!
	I0817 22:45:10.728512  260970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:45:10.824646  260970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:45:10.834194  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:10.834267  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:10.846389  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:10.846417  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:10.846479  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:10.858271  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:11.359025  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:11.359131  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:11.372048  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:11.858668  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:11.858773  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:11.871223  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:12.358700  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:12.358781  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:12.370531  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:12.859124  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:12.859235  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:12.873849  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:13.359306  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:13.359401  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:13.371543  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:13.858914  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:13.859029  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:13.872337  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:14.358989  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:14.359077  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:14.373501  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:14.859041  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:14.859129  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:14.871794  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:15.359220  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:15.359323  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:15.371564  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:15.859274  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:15.859399  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:15.872247  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:16.358761  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:16.358873  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:16.371008  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:16.859218  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:16.859339  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:16.871313  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:17.358823  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:17.358923  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:17.371373  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:17.858981  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:17.859105  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:17.871088  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:18.358778  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:18.358864  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:18.370883  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:18.858396  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:18.858480  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:18.870435  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:19.359104  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:19.359197  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:19.371339  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:19.859162  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:19.859262  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:19.871942  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:20.359243  260970 api_server.go:166] Checking apiserver status ...
	I0817 22:45:20.359338  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:45:20.370993  260970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:45:20.834879  260970 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:45:20.834921  260970 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:45:20.834953  260970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:45:20.835036  260970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:45:20.868583  260970 cri.go:89] found id: ""
	I0817 22:45:20.868673  260970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:45:20.884949  260970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:45:20.894673  260970 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:45:20.894765  260970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:45:20.904209  260970 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:45:20.904240  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:45:21.036394  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:45:21.918352  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:45:22.144539  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:45:22.215129  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
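
Because the admin/kubelet/controller-manager/scheduler kubeconfigs were missing, the restart path re-runs individual kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane and local etcd, in that order. After the last two phases the static pod manifests should exist on the node, which can be confirmed with:

    sudo ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
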
	I0817 22:45:22.293742  260970 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:45:22.293821  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:22.314099  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:22.832320  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:23.332281  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:23.832391  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:24.332375  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:24.831705  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:24.856643  260970 api_server.go:72] duration metric: took 2.562899983s to wait for apiserver process to appear ...
	I0817 22:45:24.856675  260970 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:45:24.856693  260970 api_server.go:253] Checking apiserver healthz at https://192.168.72.79:8443/healthz ...
	I0817 22:45:28.915430  260970 api_server.go:279] https://192.168.72.79:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:45:28.915466  260970 api_server.go:103] status: https://192.168.72.79:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:45:28.915479  260970 api_server.go:253] Checking apiserver healthz at https://192.168.72.79:8443/healthz ...
	I0817 22:45:29.027552  260970 api_server.go:279] https://192.168.72.79:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:45:29.027596  260970 api_server.go:103] status: https://192.168.72.79:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:45:29.528350  260970 api_server.go:253] Checking apiserver healthz at https://192.168.72.79:8443/healthz ...
	I0817 22:45:29.539060  260970 api_server.go:279] https://192.168.72.79:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:45:29.539102  260970 api_server.go:103] status: https://192.168.72.79:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:45:30.028336  260970 api_server.go:253] Checking apiserver healthz at https://192.168.72.79:8443/healthz ...
	I0817 22:45:30.035216  260970 api_server.go:279] https://192.168.72.79:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:45:30.035252  260970 api_server.go:103] status: https://192.168.72.79:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:45:30.527815  260970 api_server.go:253] Checking apiserver healthz at https://192.168.72.79:8443/healthz ...
	I0817 22:45:30.548562  260970 api_server.go:279] https://192.168.72.79:8443/healthz returned 200:
	ok
	I0817 22:45:30.562671  260970 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:45:30.562703  260970 api_server.go:131] duration metric: took 5.706022243s to wait for apiserver health ...
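	The healthz wait above is a plain retry-until-200 loop: api_server.go keeps probing https://192.168.72.79:8443/healthz, logging each 500 response, until the endpoint returns 200. A minimal standalone sketch of that pattern in Go, assuming the endpoint from the log and skipping TLS verification purely so the sketch runs without the cluster CA (the real client trusts the cluster certificates):

	// healthz_poll.go - illustrative sketch only: poll an apiserver /healthz
	// endpoint until it returns HTTP 200 or a deadline expires, mirroring the
	// retry loop visible in the api_server.go log lines above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		url := "https://192.168.72.79:8443/healthz" // assumption: endpoint taken from the log
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the sketch self-contained; minikube's
			// real client authenticates with the cluster certificates instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log timestamps
		}
		fmt.Fprintln(os.Stderr, "apiserver never became healthy")
		os.Exit(1)
	}
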
	I0817 22:45:30.562712  260970 cni.go:84] Creating CNI manager for ""
	I0817 22:45:30.562718  260970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:45:30.565248  260970 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:45:30.567280  260970 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:45:30.580982  260970 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
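	The 457-byte file copied above is the bridge CNI configuration minikube generates for the "kvm2" + "crio" combination; its exact contents are not shown in the log. As a rough illustration only, a bridge conflist of that general shape can be written like this (the subnet, plugin list, and file name below are assumptions for the sketch, not the bytes minikube actually wrote):

	// write_conflist.go - illustrative sketch only: write a minimal bridge CNI
	// conflist similar in shape to the one scp'd to /etc/cni/net.d/1-k8s.conflist.
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}
	`

	func main() {
		// The sketch writes to the current directory; minikube writes the file
		// over SSH into /etc/cni/net.d on the node.
		if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
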
	I0817 22:45:30.601009  260970 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:45:30.616020  260970 system_pods.go:59] 9 kube-system pods found
	I0817 22:45:30.616089  260970 system_pods.go:61] "coredns-5dd5756b68-jd7t8" [a1182157-a706-4f2d-908d-19afab8bf263] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:45:30.616105  260970 system_pods.go:61] "coredns-5dd5756b68-pbht5" [001b4ce9-ebb1-467a-8fdc-bf1e015b743e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:45:30.616115  260970 system_pods.go:61] "etcd-newest-cni-249978" [bbe95e60-a324-4d3e-98f0-6dcde30a6b75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:45:30.616129  260970 system_pods.go:61] "kube-apiserver-newest-cni-249978" [dd869779-b960-4a16-854a-3975888e696f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:45:30.616142  260970 system_pods.go:61] "kube-controller-manager-newest-cni-249978" [9db1b35a-00fc-4927-9016-bdaf8e3156c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:45:30.616155  260970 system_pods.go:61] "kube-proxy-tgmbw" [9cc5a0ba-912a-42c5-bfd4-78e8ca66bece] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:45:30.616166  260970 system_pods.go:61] "kube-scheduler-newest-cni-249978" [3b6d3040-355e-44be-8f83-cf5809930748] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:45:30.616182  260970 system_pods.go:61] "metrics-server-57f55c9bc5-s7jt2" [93d96bda-0eac-4797-8818-abe641de05bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:45:30.616193  260970 system_pods.go:61] "storage-provisioner" [98d94c22-18f4-43bc-a12b-2af341879079] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:45:30.616205  260970 system_pods.go:74] duration metric: took 15.162578ms to wait for pod list to return data ...
	I0817 22:45:30.616217  260970 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:45:30.620926  260970 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:45:30.621002  260970 node_conditions.go:123] node cpu capacity is 2
	I0817 22:45:30.621017  260970 node_conditions.go:105] duration metric: took 4.794256ms to run NodePressure ...
	I0817 22:45:30.621043  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:45:30.911120  260970 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:45:30.929621  260970 ops.go:34] apiserver oom_adj: -16
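	The oom_adj check above shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj and parses the -16, confirming the apiserver process is shielded from the OOM killer. A small local equivalent in Go, as a sketch (hypothetical helper that reads /proc directly instead of over SSH; the PID is supplied by the caller):

	// oom_adj.go - illustrative sketch only: read a process's oom_adj from /proc,
	// the value the log above reports as -16 for kube-apiserver.
	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	func oomAdj(pid int) (int, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(data)))
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: oom_adj <pid>")
			os.Exit(1)
		}
		pid, err := strconv.Atoi(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, "invalid pid:", os.Args[1])
			os.Exit(1)
		}
		adj, err := oomAdj(pid)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("oom_adj: %d\n", adj)
	}
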
	I0817 22:45:30.929657  260970 kubeadm.go:640] restartCluster took 20.213664103s
	I0817 22:45:30.929668  260970 kubeadm.go:406] StartCluster complete in 20.261432087s
	I0817 22:45:30.929691  260970 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:45:30.929787  260970 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:45:30.931079  260970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:45:30.931415  260970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:45:30.931546  260970 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:45:30.931672  260970 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-249978"
	I0817 22:45:30.931694  260970 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-249978"
	I0817 22:45:30.931718  260970 addons.go:69] Setting default-storageclass=true in profile "newest-cni-249978"
	W0817 22:45:30.931726  260970 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:45:30.931742  260970 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-249978"
	I0817 22:45:30.931793  260970 host.go:66] Checking if "newest-cni-249978" exists ...
	I0817 22:45:30.931820  260970 addons.go:69] Setting metrics-server=true in profile "newest-cni-249978"
	I0817 22:45:30.931843  260970 addons.go:231] Setting addon metrics-server=true in "newest-cni-249978"
	W0817 22:45:30.931852  260970 addons.go:240] addon metrics-server should already be in state true
	I0817 22:45:30.931855  260970 addons.go:69] Setting dashboard=true in profile "newest-cni-249978"
	I0817 22:45:30.931872  260970 addons.go:231] Setting addon dashboard=true in "newest-cni-249978"
	I0817 22:45:30.931695  260970 config.go:182] Loaded profile config "newest-cni-249978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	W0817 22:45:30.931882  260970 addons.go:240] addon dashboard should already be in state true
	I0817 22:45:30.931931  260970 host.go:66] Checking if "newest-cni-249978" exists ...
	I0817 22:45:30.932008  260970 host.go:66] Checking if "newest-cni-249978" exists ...
	I0817 22:45:30.932196  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.932217  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.932235  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.932253  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.932333  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.932352  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.932366  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.932373  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.949411  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35077
	I0817 22:45:30.949656  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0817 22:45:30.949869  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.950004  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.950586  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.950615  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.950743  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45701
	I0817 22:45:30.950901  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0817 22:45:30.951005  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.951103  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.951128  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.951152  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.951585  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.951602  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.951637  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.951650  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.951675  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.951699  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.951906  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.952076  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:45:30.952238  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.952343  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.952367  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.952388  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.952812  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.953443  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:30.953493  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:30.969079  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0817 22:45:30.969632  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.970305  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.970331  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.970788  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.970994  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:45:30.973155  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:30.976355  260970 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0817 22:45:30.973787  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42229
	I0817 22:45:30.976221  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0817 22:45:30.980275  260970 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0817 22:45:30.978882  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.979162  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:30.982127  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 22:45:30.982150  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 22:45:30.982178  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:30.982703  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.982727  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.982845  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:30.982862  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:30.983109  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.983387  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:45:30.983448  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:30.983746  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:45:30.985397  260970 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-249978" context rescaled to 1 replicas
	I0817 22:45:30.985437  260970 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:45:30.987603  260970 out.go:177] * Verifying Kubernetes components...
	I0817 22:45:30.986124  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:30.986578  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:30.987137  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:30.987821  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:30.989492  260970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:45:30.989575  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:30.989620  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:30.994145  260970 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:45:30.990154  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:30.993012  260970 addons.go:231] Setting addon default-storageclass=true in "newest-cni-249978"
	I0817 22:45:30.996272  260970 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:45:30.996592  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	W0817 22:45:30.998009  260970 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:45:30.998045  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:45:30.998067  260970 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:45:31.000248  260970 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:45:30.998080  260970 host.go:66] Checking if "newest-cni-249978" exists ...
	I0817 22:45:30.998098  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:30.998299  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:31.000365  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:45:31.000437  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:31.000718  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:31.000776  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:31.005704  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:31.005975  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:31.006317  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:31.006349  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:31.006607  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:31.006704  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:31.006726  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:31.006868  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:31.006925  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:31.006997  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:31.007151  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:31.007168  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:31.007336  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:31.007490  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:31.019345  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0817 22:45:31.019856  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:31.020422  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:31.020452  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:31.020823  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:31.021315  260970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:45:31.021359  260970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:45:31.039821  260970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0817 22:45:31.040367  260970 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:45:31.040924  260970 main.go:141] libmachine: Using API Version  1
	I0817 22:45:31.040954  260970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:45:31.041339  260970 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:45:31.041521  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetState
	I0817 22:45:31.043232  260970 main.go:141] libmachine: (newest-cni-249978) Calling .DriverName
	I0817 22:45:31.043501  260970 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:45:31.043518  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:45:31.043535  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHHostname
	I0817 22:45:31.046346  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:31.046792  260970 main.go:141] libmachine: (newest-cni-249978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:0c:ac", ip: ""} in network mk-newest-cni-249978: {Iface:virbr4 ExpiryTime:2023-08-17 23:44:56 +0000 UTC Type:0 Mac:52:54:00:88:0c:ac Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:newest-cni-249978 Clientid:01:52:54:00:88:0c:ac}
	I0817 22:45:31.046844  260970 main.go:141] libmachine: (newest-cni-249978) DBG | domain newest-cni-249978 has defined IP address 192.168.72.79 and MAC address 52:54:00:88:0c:ac in network mk-newest-cni-249978
	I0817 22:45:31.047002  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHPort
	I0817 22:45:31.047235  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHKeyPath
	I0817 22:45:31.047522  260970 main.go:141] libmachine: (newest-cni-249978) Calling .GetSSHUsername
	I0817 22:45:31.047694  260970 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/newest-cni-249978/id_rsa Username:docker}
	I0817 22:45:31.211089  260970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:45:31.258110  260970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:45:31.261057  260970 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:45:31.261085  260970 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:45:31.261154  260970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:45:31.266876  260970 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:45:31.266907  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:45:31.267526  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 22:45:31.267552  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 22:45:31.319779  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 22:45:31.319814  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 22:45:31.320652  260970 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:45:31.320679  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:45:31.364216  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 22:45:31.364256  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 22:45:31.386146  260970 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:45:31.386179  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:45:31.404709  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 22:45:31.404739  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0817 22:45:31.433216  260970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:45:31.500619  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 22:45:31.500647  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 22:45:31.583540  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 22:45:31.583574  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 22:45:31.680107  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 22:45:31.680139  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 22:45:31.738676  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 22:45:31.738702  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 22:45:31.785088  260970 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 22:45:31.785115  260970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 22:45:31.820888  260970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 22:45:33.617389  260970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.359222298s)
	I0817 22:45:33.617440  260970 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.35625978s)
	I0817 22:45:33.617453  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.617468  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.617469  260970 api_server.go:72] duration metric: took 2.632001771s to wait for apiserver process to appear ...
	I0817 22:45:33.617477  260970 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:45:33.617498  260970 api_server.go:253] Checking apiserver healthz at https://192.168.72.79:8443/healthz ...
	I0817 22:45:33.617826  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.617852  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.617863  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.617875  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.617890  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Closing plugin on server side
	I0817 22:45:33.618205  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.618225  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.618240  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.618255  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.618538  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Closing plugin on server side
	I0817 22:45:33.618574  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.618591  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.620222  260970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.409093844s)
	I0817 22:45:33.620261  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.620276  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.620509  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.620525  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.620534  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.620542  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.620777  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.620795  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.632812  260970 api_server.go:279] https://192.168.72.79:8443/healthz returned 200:
	ok
	I0817 22:45:33.635010  260970 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:45:33.635041  260970 api_server.go:131] duration metric: took 17.557031ms to wait for apiserver health ...
	I0817 22:45:33.635050  260970 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:45:33.643200  260970 system_pods.go:59] 9 kube-system pods found
	I0817 22:45:33.643240  260970 system_pods.go:61] "coredns-5dd5756b68-jd7t8" [a1182157-a706-4f2d-908d-19afab8bf263] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:45:33.643249  260970 system_pods.go:61] "coredns-5dd5756b68-pbht5" [001b4ce9-ebb1-467a-8fdc-bf1e015b743e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:45:33.643257  260970 system_pods.go:61] "etcd-newest-cni-249978" [bbe95e60-a324-4d3e-98f0-6dcde30a6b75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:45:33.643265  260970 system_pods.go:61] "kube-apiserver-newest-cni-249978" [dd869779-b960-4a16-854a-3975888e696f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:45:33.643271  260970 system_pods.go:61] "kube-controller-manager-newest-cni-249978" [9db1b35a-00fc-4927-9016-bdaf8e3156c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:45:33.643276  260970 system_pods.go:61] "kube-proxy-tgmbw" [9cc5a0ba-912a-42c5-bfd4-78e8ca66bece] Running
	I0817 22:45:33.643282  260970 system_pods.go:61] "kube-scheduler-newest-cni-249978" [3b6d3040-355e-44be-8f83-cf5809930748] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:45:33.643292  260970 system_pods.go:61] "metrics-server-57f55c9bc5-s7jt2" [93d96bda-0eac-4797-8818-abe641de05bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:45:33.643296  260970 system_pods.go:61] "storage-provisioner" [98d94c22-18f4-43bc-a12b-2af341879079] Running
	I0817 22:45:33.643303  260970 system_pods.go:74] duration metric: took 8.24776ms to wait for pod list to return data ...
	I0817 22:45:33.643312  260970 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:45:33.652127  260970 default_sa.go:45] found service account: "default"
	I0817 22:45:33.652159  260970 default_sa.go:55] duration metric: took 8.841852ms for default service account to be created ...
	I0817 22:45:33.652169  260970 kubeadm.go:581] duration metric: took 2.666703236s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0817 22:45:33.652185  260970 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:45:33.667407  260970 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:45:33.667439  260970 node_conditions.go:123] node cpu capacity is 2
	I0817 22:45:33.667449  260970 node_conditions.go:105] duration metric: took 15.259809ms to run NodePressure ...
	I0817 22:45:33.667461  260970 start.go:228] waiting for startup goroutines ...
	I0817 22:45:33.813272  260970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.379991896s)
	I0817 22:45:33.813353  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.813374  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.813721  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.813748  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.813759  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:33.813772  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:33.814037  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Closing plugin on server side
	I0817 22:45:33.814097  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:33.814122  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:33.814133  260970 addons.go:467] Verifying addon metrics-server=true in "newest-cni-249978"
	I0817 22:45:34.652352  260970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.831393562s)
	I0817 22:45:34.652430  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:34.652449  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:34.652834  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:34.652855  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:34.652877  260970 main.go:141] libmachine: Making call to close driver server
	I0817 22:45:34.652901  260970 main.go:141] libmachine: (newest-cni-249978) Calling .Close
	I0817 22:45:34.653198  260970 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:45:34.653252  260970 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:45:34.653259  260970 main.go:141] libmachine: (newest-cni-249978) DBG | Closing plugin on server side
	I0817 22:45:34.655969  260970 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-249978 addons enable metrics-server	
	
	
	I0817 22:45:34.658503  260970 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0817 22:45:34.660603  260970 addons.go:502] enable addons completed in 3.729060346s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0817 22:45:34.660667  260970 start.go:233] waiting for cluster config update ...
	I0817 22:45:34.660684  260970 start.go:242] writing updated cluster config ...
	I0817 22:45:34.661143  260970 ssh_runner.go:195] Run: rm -f paused
	I0817 22:45:34.752471  260970 start.go:600] kubectl: 1.28.0, cluster: 1.28.0-rc.1 (minor skew: 0)
	I0817 22:45:34.754781  260970 out.go:177] * Done! kubectl is now configured to use "newest-cni-249978" cluster and "default" namespace by default
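	Once a start like the one above completes, the kubeconfig context named after the profile can be exercised directly with kubectl. A small usage sketch in Go (the context name comes from the log; kubectl on PATH is assumed, and the commands are an illustration rather than part of the test run):

	// smoke_check.go - illustrative sketch only: run kubectl against the context
	// that the log above says is now configured, and print the pod list.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "newest-cni-249978", "get", "pods", "-A")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubectl failed:", err)
			os.Exit(1)
		}
	}
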
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:24:46 UTC, ends at Thu 2023-08-17 22:45:52 UTC. --
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.572240766Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e34eda232b71c0bb3994f148bcbfc5dd3e2f515f7ad57682606a238903618042,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-lw5bp,Uid:b197e3ce-ee02-467c-b87f-de8bc2b6802f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311439118354255,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-lw5bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b197e3ce-ee02-467c-b87f-de8bc2b6802f,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:30:38.770871656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:02b2bd5a-9e11-4476-81c5-fe927c4
ef543,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311438980352578,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\
",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T22:30:38.638276524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-2gh8n,Uid:44728d42-fce0-4a11-ba30-094a44b9313a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311436917954549,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:30:36.583989213Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&PodSandboxMetadata{Name:kube-proxy-k2jz7,Uid:1fedb8b2-1800-4933-
b964-6080cc760045,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311436503766657,Labels:map[string]string{controller-revision-hash: 86cc8bcbf7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:30:35.862745498Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-321287,Uid:04e945a80216f830497b31b89421c70e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311412706332244,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 04e945a80216f830497b31b89421c70e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 04e945a80216f830497b31b89421c70e,kubernetes.io/config.seen: 2023-08-17T22:30:12.141231664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-321287,Uid:6e7412a207e7573fc22d8c2b5f5da127,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311412693857262,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e7412a207e7573fc22d8c2b5f5da127,kubernetes.io/config.seen: 2023-08-17T22:30:12.141230479Z,kubernetes.io/config.source: file,},Run
timeHandler:,},&PodSandbox{Id:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-321287,Uid:73d361ab4418927a569781cffbcb19c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311412660002301,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.30:8444,kubernetes.io/config.hash: 73d361ab4418927a569781cffbcb19c0,kubernetes.io/config.seen: 2023-08-17T22:30:12.141229034Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-321287,Uid:da775db6f21c1f41aa6b
992356315d15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311412631309697,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41aa6b992356315d15,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.30:2379,kubernetes.io/config.hash: da775db6f21c1f41aa6b992356315d15,kubernetes.io/config.seen: 2023-08-17T22:30:12.141222774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=ce64dabe-ed52-4c3d-892a-6237f95332c9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.573067798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34d3a37c-5f65-4a9d-b457-b42c49f2a91f name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.573150794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34d3a37c-5f65-4a9d-b457-b42c49f2a91f name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.573331895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34d3a37c-5f65-4a9d-b457-b42c49f2a91f name=/runtime.v1.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.588000491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3a7bb0cb-1992-412a-bd95-5bc2a47fb186 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.588089209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3a7bb0cb-1992-412a-bd95-5bc2a47fb186 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.588284800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3a7bb0cb-1992-412a-bd95-5bc2a47fb186 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.624642492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7595d4de-3293-467d-9225-46c3445a1d64 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.624730666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7595d4de-3293-467d-9225-46c3445a1d64 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.624914391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7595d4de-3293-467d-9225-46c3445a1d64 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.661598218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=84c14b7b-13b8-4c66-956e-356b5d899ed9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.661686625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=84c14b7b-13b8-4c66-956e-356b5d899ed9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.661863986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=84c14b7b-13b8-4c66-956e-356b5d899ed9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.701977104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=83d18626-d214-46af-baa3-50c9d5c84b74 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.702042144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=83d18626-d214-46af-baa3-50c9d5c84b74 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.702216329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=83d18626-d214-46af-baa3-50c9d5c84b74 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.737769681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d3c65a29-c72c-4d0b-9d6e-18f1c9303fd8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.737864274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d3c65a29-c72c-4d0b-9d6e-18f1c9303fd8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.738048077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d3c65a29-c72c-4d0b-9d6e-18f1c9303fd8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.774094696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b7dac00b-7b88-411f-ae20-0ca579ce5e0d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.774187250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b7dac00b-7b88-411f-ae20-0ca579ce5e0d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.774426310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b7dac00b-7b88-411f-ae20-0ca579ce5e0d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.809536175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e1cfbb0f-a80c-4b28-8f40-8445a5801637 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.809725672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e1cfbb0f-a80c-4b28-8f40-8445a5801637 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:45:52 default-k8s-diff-port-321287 crio[726]: time="2023-08-17 22:45:52.809893707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41,PodSandboxId:d6f816a6adc86afb8cc877fb8924cf6b7423f8818d91a99313069a905a9cba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311440490398538,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b2bd5a-9e11-4476-81c5-fe927c4ef543,},Annotations:map[string]string{io.kubernetes.container.hash: 37a79b1a,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78,PodSandboxId:f825ccdf72b0ccbf503c4d1956d3433441cb913b76872d9664c21224ed6824f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf,State:CONTAINER_RUNNING,CreatedAt:1692311440208094009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2jz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fedb8b2-1800-4933-b964-6080cc760045,},Annotations:map[string]string{io.kubernetes.container.hash: f0ff81fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861,PodSandboxId:94f102d81ed5eef87d07c05c3c1ca267d7cfe0b788a90bed722cb090fb709208,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1692311439649669938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2gh8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44728d42-fce0-4a11-ba30-094a44b9313a,},Annotations:map[string]string{io.kubernetes.container.hash: 4576abcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6,PodSandboxId:a5b502f330514fdab9571428f2dca3ad958b388309bd2217788748b180a177f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1692311414027200339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da775db6f21c1f41a
a6b992356315d15,},Annotations:map[string]string{io.kubernetes.container.hash: bb0c5b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b,PodSandboxId:3ad675abb698b1b26fab2c73f8b52b954de4403dae656c3231614c7e85a826bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265,State:CONTAINER_RUNNING,CreatedAt:1692311413532388559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 6e7412a207e7573fc22d8c2b5f5da127,},Annotations:map[string]string{io.kubernetes.container.hash: e797b7a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f,PodSandboxId:27d0d2a52392141dfdcc5832261a0f96100c897a09e0a4f75a5dfeeb3adf4544,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d,State:CONTAINER_RUNNING,CreatedAt:1692311413387693705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 73d361ab4418927a569781cffbcb19c0,},Annotations:map[string]string{io.kubernetes.container.hash: 13688c9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c,PodSandboxId:009245a17138c011b936eee325a128e3ba44c2e8d9ade55bde238bb6963a82b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af,State:CONTAINER_RUNNING,CreatedAt:1692311413224654057,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-321287,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 04e945a80216f830497b31b89421c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 373e41ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e1cfbb0f-a80c-4b28-8f40-8445a5801637 name=/runtime.v1alpha2.RuntimeService/ListContainers
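	The repeated crio debug entries above are CRI ListContainers round-trips: each request arrives with an empty ContainerFilter, so crio logs "No filters were applied, returning full container list" and answers with the node's full container inventory on both the runtime.v1 and runtime.v1alpha2 RuntimeService endpoints. As a minimal, illustrative sketch (not part of minikube or its test harness), the same empty-filter call can be issued in Go against the CRI socket named in the node annotations below; the socket path, timeout, and output formatting are assumptions made for illustration only.

	// listcontainers.go: issue the same empty-filter ListContainers call that
	// the crio debug log above records. Socket path and timeout are assumptions.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// The kubeadm cri-socket annotation in this report points at CRI-O's socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Empty filter: the runtime returns every container, exactly as in the
		// ListContainersResponse entries logged above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State.String())
		}
	}

	Run against the same node, this sketch would be expected to print the seven RUNNING containers (truncated to 13-character IDs) that also appear in the "container status" table below.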
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	9fd26bcc5bfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d6f816a6adc86
	5bdba67d69f89       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   15 minutes ago      Running             kube-proxy                0                   f825ccdf72b0c
	7403ecb81788c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   94f102d81ed5e
	5d3f4cfe29dcc       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   15 minutes ago      Running             etcd                      2                   a5b502f330514
	0767cda0efa92       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   15 minutes ago      Running             kube-controller-manager   2                   3ad675abb698b
	6c82fbf22edcc       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   15 minutes ago      Running             kube-apiserver            2                   27d0d2a523921
	fd04443a08b3d       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   15 minutes ago      Running             kube-scheduler            2                   009245a17138c
	
	* 
	* ==> coredns [7403ecb81788c3fd9fc15e415c4ded4859eff50be0f46216371b8335063df861] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-321287
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-321287
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=default-k8s-diff-port-321287
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-321287
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 22:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:40:55 +0000   Thu, 17 Aug 2023 22:30:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.30
	  Hostname:    default-k8s-diff-port-321287
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b18b3c66a6b4dd9966f408987de00b0
	  System UUID:                1b18b3c6-6a6b-4dd9-966f-408987de00b0
	  Boot ID:                    deb09338-68db-4e09-8863-8f7556e89e91
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-2gh8n                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-321287                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-321287             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-321287    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-k2jz7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-321287             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-74d5c6b9c-lw5bp                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-321287 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node default-k8s-diff-port-321287 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-321287 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-321287 event: Registered Node default-k8s-diff-port-321287 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075948] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.441486] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.530699] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148514] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.633030] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.079563] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.128876] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.183083] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.133846] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.267796] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[Aug17 22:25] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +20.819164] kauditd_printk_skb: 29 callbacks suppressed
	[Aug17 22:30] systemd-fstab-generator[3559]: Ignoring "noauto" for root device
	[ +10.386451] systemd-fstab-generator[3885]: Ignoring "noauto" for root device
	[ +27.877441] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [5d3f4cfe29dccf2ac2d57556deb6f23197e8a4d9796a1ac3d473221d4aacb1c6] <==
	* {"level":"info","ts":"2023-08-17T22:30:16.413Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-08-17T22:30:36.330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.063098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:30:36.331Z","caller":"traceutil/trace.go:171","msg":"trace[2114995631] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:369; }","duration":"162.478584ms","start":"2023-08-17T22:30:36.169Z","end":"2023-08-17T22:30:36.331Z","steps":["trace[2114995631] 'agreement among raft nodes before linearized reading'  (duration: 114.727223ms)","trace[2114995631] 'range keys from in-memory index tree'  (duration: 42.246105ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:30:36.331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.028791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-321287\" ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2023-08-17T22:30:36.332Z","caller":"traceutil/trace.go:171","msg":"trace[1053105977] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-321287; range_end:; response_count:1; response_revision:369; }","duration":"147.084413ms","start":"2023-08-17T22:30:36.184Z","end":"2023-08-17T22:30:36.332Z","steps":["trace[1053105977] 'agreement among raft nodes before linearized reading'  (duration: 99.871031ms)","trace[1053105977] 'range keys from in-memory index tree'  (duration: 46.955829ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T22:40:16.516Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2023-08-17T22:40:16.519Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":723,"took":"2.045503ms","hash":2144177133}
	{"level":"info","ts":"2023-08-17T22:40:16.519Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2144177133,"revision":723,"compact-revision":-1}
	{"level":"warn","ts":"2023-08-17T22:44:00.216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.039168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:44:00.216Z","caller":"traceutil/trace.go:171","msg":"trace[810372390] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1148; }","duration":"109.57493ms","start":"2023-08-17T22:44:00.106Z","end":"2023-08-17T22:44:00.216Z","steps":["trace[810372390] 'range keys from in-memory index tree'  (duration: 108.883018ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:44:00.740Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.009588ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4429723465241359676 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.30\" mod_revision:1141 > success:<request_put:<key:\"/registry/masterleases/192.168.50.30\" value_size:66 lease:4429723465241359674 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.30\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-08-17T22:44:00.740Z","caller":"traceutil/trace.go:171","msg":"trace[1843633028] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"257.053469ms","start":"2023-08-17T22:44:00.483Z","end":"2023-08-17T22:44:00.740Z","steps":["trace[1843633028] 'process raft request'  (duration: 129.626872ms)","trace[1843633028] 'compare'  (duration: 125.801429ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:45:10.947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.055532ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4429723465241360020 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.30\" mod_revision:1196 > success:<request_put:<key:\"/registry/masterleases/192.168.50.30\" value_size:66 lease:4429723465241360017 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.30\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-08-17T22:45:10.947Z","caller":"traceutil/trace.go:171","msg":"trace[665136838] transaction","detail":"{read_only:false; response_revision:1204; number_of_response:1; }","duration":"509.661762ms","start":"2023-08-17T22:45:10.437Z","end":"2023-08-17T22:45:10.947Z","steps":["trace[665136838] 'process raft request'  (duration: 509.571307ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:45:10.947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:45:10.437Z","time spent":"509.739021ms","remote":"127.0.0.1:48664","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1202 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-08-17T22:45:10.947Z","caller":"traceutil/trace.go:171","msg":"trace[300648675] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"515.52635ms","start":"2023-08-17T22:45:10.432Z","end":"2023-08-17T22:45:10.947Z","steps":["trace[300648675] 'process raft request'  (duration: 127.685212ms)","trace[300648675] 'compare'  (duration: 386.77698ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T22:45:10.947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:45:10.432Z","time spent":"515.644506ms","remote":"127.0.0.1:48644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.30\" mod_revision:1196 > success:<request_put:<key:\"/registry/masterleases/192.168.50.30\" value_size:66 lease:4429723465241360017 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.30\" > >"}
	{"level":"warn","ts":"2023-08-17T22:45:11.414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.01364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:45:11.414Z","caller":"traceutil/trace.go:171","msg":"trace[1920233435] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1204; }","duration":"308.186192ms","start":"2023-08-17T22:45:11.106Z","end":"2023-08-17T22:45:11.414Z","steps":["trace[1920233435] 'range keys from in-memory index tree'  (duration: 307.558059ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T22:45:11.414Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-17T22:45:11.106Z","time spent":"308.423348ms","remote":"127.0.0.1:48668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2023-08-17T22:45:11.414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.332568ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T22:45:11.414Z","caller":"traceutil/trace.go:171","msg":"trace[392239755] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1204; }","duration":"304.873865ms","start":"2023-08-17T22:45:11.110Z","end":"2023-08-17T22:45:11.414Z","steps":["trace[392239755] 'range keys from in-memory index tree'  (duration: 304.312603ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T22:45:16.524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2023-08-17T22:45:16.527Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":966,"took":"1.832791ms","hash":2843340825}
	{"level":"info","ts":"2023-08-17T22:45:16.527Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2843340825,"revision":966,"compact-revision":723}
	
	* 
	* ==> kernel <==
	*  22:45:53 up 21 min,  0 users,  load average: 0.35, 0.34, 0.33
	Linux default-k8s-diff-port-321287 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6c82fbf22edcc2b8d042a8e8fa6d176b7166ee3d07cdca3a52fd8ee79fed4d6f] <==
	* W0817 22:43:19.320759       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:43:19.320866       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:43:19.320881       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:44:18.198754       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:44:18.198827       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:45:10.948712       1 trace.go:219] Trace[1396445691]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.30,type:*v1.Endpoints,resource:apiServerIPInfo (17-Aug-2023 22:45:10.365) (total time: 582ms):
	Trace[1396445691]: ---"Transaction prepared" 64ms (22:45:10.431)
	Trace[1396445691]: ---"Txn call completed" 517ms (22:45:10.948)
	Trace[1396445691]: [582.943555ms] [582.943555ms] END
	I0817 22:45:10.949140       1 trace.go:219] Trace[65756196]: "Update" accept:application/json, */*,audit-id:60c68022-a7ef-4549-9885-6b712193f105,client:192.168.50.30,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Aug-2023 22:45:10.435) (total time: 513ms):
	Trace[65756196]: ["GuaranteedUpdate etcd3" audit-id:60c68022-a7ef-4549-9885-6b712193f105,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 513ms (22:45:10.435)
	Trace[65756196]:  ---"Txn call completed" 512ms (22:45:10.948)]
	Trace[65756196]: [513.716429ms] [513.716429ms] END
	I0817 22:45:18.199173       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:45:18.199355       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0817 22:45:18.318894       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.243.8:443: connect: connection refused
	I0817 22:45:18.318991       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0817 22:45:19.318703       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:45:19.318881       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0817 22:45:19.318710       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 22:45:19.319011       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0817 22:45:19.318918       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:45:19.320465       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0767cda0efa92ed7bea03c9baecb20d26af2b94df0e88e5eefa1f30e27347b6b] <==
	* W0817 22:39:35.885185       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:40:05.375624       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:40:05.895696       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:40:35.383051       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:40:35.906632       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:41:05.389369       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:41:05.916033       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:41:35.399159       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:41:35.929684       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:42:05.405021       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:42:05.940989       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:42:35.412816       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:42:35.950455       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:43:05.420181       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:43:05.960455       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:43:35.429464       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:43:35.971906       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:44:05.437015       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:44:05.982493       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:44:35.443980       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:44:35.992246       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:45:05.453223       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:45:06.005314       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0817 22:45:35.461483       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0817 22:45:36.022253       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [5bdba67d69f8911550b675837ccc94248b55cd2f8ea321862bfac8142d9f2b78] <==
	* I0817 22:30:40.520516       1 node.go:141] Successfully retrieved node IP: 192.168.50.30
	I0817 22:30:40.520787       1 server_others.go:110] "Detected node IP" address="192.168.50.30"
	I0817 22:30:40.520829       1 server_others.go:554] "Using iptables proxy"
	I0817 22:30:40.629748       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0817 22:30:40.629837       1 server_others.go:192] "Using iptables Proxier"
	I0817 22:30:40.629911       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 22:30:40.631106       1 server.go:658] "Version info" version="v1.27.4"
	I0817 22:30:40.631161       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 22:30:40.639782       1 config.go:188] "Starting service config controller"
	I0817 22:30:40.639841       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 22:30:40.639897       1 config.go:97] "Starting endpoint slice config controller"
	I0817 22:30:40.639914       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 22:30:40.641105       1 config.go:315] "Starting node config controller"
	I0817 22:30:40.641149       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 22:30:40.739910       1 shared_informer.go:318] Caches are synced for service config
	I0817 22:30:40.740015       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 22:30:40.742681       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fd04443a08b3dc6de9c1b993f28d4cbbf812eaf77b10239fc6588ed3e0ec418c] <==
	* W0817 22:30:19.260410       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:30:19.260507       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 22:30:19.273750       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:19.273848       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0817 22:30:19.317695       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:30:19.317795       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 22:30:19.330255       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:30:19.330332       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0817 22:30:19.342390       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:30:19.342452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 22:30:19.492879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:19.492933       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 22:30:19.529058       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 22:30:19.529113       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0817 22:30:19.567792       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:19.567867       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 22:30:19.699217       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:30:19.699273       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 22:30:19.749246       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:30:19.749392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 22:30:19.786010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:30:19.786098       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 22:30:19.794357       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 22:30:19.794455       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 22:30:21.903111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:24:46 UTC, ends at Thu 2023-08-17 22:45:53 UTC. --
	Aug 17 22:43:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:43:22.655194    3892 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:43:22 default-k8s-diff-port-321287 kubelet[3892]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:43:22 default-k8s-diff-port-321287 kubelet[3892]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:43:22 default-k8s-diff-port-321287 kubelet[3892]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:43:23 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:43:23.573100    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:43:36 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:43:36.576445    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:43:47 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:43:47.572120    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:44:00 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:44:00.573822    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:44:11 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:44:11.573215    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:44:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:44:22.655356    3892 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:44:22 default-k8s-diff-port-321287 kubelet[3892]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:44:22 default-k8s-diff-port-321287 kubelet[3892]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:44:22 default-k8s-diff-port-321287 kubelet[3892]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:44:26 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:44:26.573311    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:44:37 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:44:37.572905    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:44:51 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:44:51.572803    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:45:05 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:45:05.573851    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:45:17 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:45:17.572538    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:45:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:45:22.655090    3892 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 17 22:45:22 default-k8s-diff-port-321287 kubelet[3892]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 17 22:45:22 default-k8s-diff-port-321287 kubelet[3892]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 17 22:45:22 default-k8s-diff-port-321287 kubelet[3892]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 17 22:45:22 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:45:22.709880    3892 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Aug 17 22:45:30 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:45:30.574135    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	Aug 17 22:45:44 default-k8s-diff-port-321287 kubelet[3892]: E0817 22:45:44.572706    3892 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-lw5bp" podUID=b197e3ce-ee02-467c-b87f-de8bc2b6802f
	
	* 
	* ==> storage-provisioner [9fd26bcc5bfe4e9bb4d4b09a49c2dc1cfa6973a304cca2f7815ddf3c83018a41] <==
	* I0817 22:30:40.634482       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:30:40.662159       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:30:40.663028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:30:40.696318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:30:40.698116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-321287_1a13d18e-b8eb-4dab-8860-b65ca51cff07!
	I0817 22:30:40.705200       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"369f6126-d0c1-4c9f-b15f-d77f0f393dd4", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-321287_1a13d18e-b8eb-4dab-8860-b65ca51cff07 became leader
	I0817 22:30:40.810215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-321287_1a13d18e-b8eb-4dab-8860-b65ca51cff07!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-lw5bp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 describe pod metrics-server-74d5c6b9c-lw5bp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321287 describe pod metrics-server-74d5c6b9c-lw5bp: exit status 1 (64.810999ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-lw5bp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-321287 describe pod metrics-server-74d5c6b9c-lw5bp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (368.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (111.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0817 22:42:07.552892  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:42:14.045061  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:42:56.109822  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:43:09.343935  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-294781 -n old-k8s-version-294781
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-17 22:43:24.973033789 +0000 UTC m=+5578.613801511
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-294781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-294781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.025µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-294781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-294781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-294781 logs -n 25: (1.666766599s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-975779 sudo cat                              | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo                                  | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo find                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-975779 sudo crio                             | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-975779                                       | bridge-975779                | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-340676 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:15 UTC |
	|         | disable-driver-mounts-340676                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:15 UTC | 17 Aug 23 22:17 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-294781        | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-525875             | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC | 17 Aug 23 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-437183            | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-321287  | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC | 17 Aug 23 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:17 UTC |                     |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-294781             | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-525875                  | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-294781                              | old-k8s-version-294781       | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-525875                                   | no-preload-525875            | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:29 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-437183                 | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-437183                                  | embed-certs-437183           | jenkins | v1.31.2 | 17 Aug 23 22:19 UTC | 17 Aug 23 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-321287       | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-321287 | jenkins | v1.31.2 | 17 Aug 23 22:20 UTC | 17 Aug 23 22:30 UTC |
	|         | default-k8s-diff-port-321287                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 22:20:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 22:20:16.712686  255491 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:20:16.712825  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.712835  255491 out.go:309] Setting ErrFile to fd 2...
	I0817 22:20:16.712839  255491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:20:16.713062  255491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:20:16.713667  255491 out.go:303] Setting JSON to false
	I0817 22:20:16.714624  255491 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25342,"bootTime":1692285475,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:20:16.714682  255491 start.go:138] virtualization: kvm guest
	I0817 22:20:16.717535  255491 out.go:177] * [default-k8s-diff-port-321287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:20:16.719151  255491 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:20:16.720536  255491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:20:16.719158  255491 notify.go:220] Checking for updates...
	I0817 22:20:16.724470  255491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:20:16.726182  255491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:20:16.727902  255491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:20:16.729516  255491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:20:16.731373  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:20:16.731749  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.731825  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.746961  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0817 22:20:16.747404  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.748088  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.748116  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.748449  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.748618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.748847  255491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:20:16.749194  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:20:16.749239  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:20:16.764882  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0817 22:20:16.765357  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:20:16.765874  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:20:16.765901  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:20:16.766289  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:20:16.766480  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:20:16.802457  255491 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 22:20:16.804215  255491 start.go:298] selected driver: kvm2
	I0817 22:20:16.804235  255491 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.804379  255491 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:20:16.805157  255491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.805248  255491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 22:20:16.821166  255491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 22:20:16.821564  255491 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 22:20:16.821606  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:20:16.821619  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:20:16.821631  255491 start_flags.go:319] config:
	{Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:20:16.821815  255491 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 22:20:16.823863  255491 out.go:177] * Starting control plane node default-k8s-diff-port-321287 in cluster default-k8s-diff-port-321287
	I0817 22:20:16.825296  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:20:16.825350  255491 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 22:20:16.825365  255491 cache.go:57] Caching tarball of preloaded images
	I0817 22:20:16.825521  255491 preload.go:174] Found /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 22:20:16.825536  255491 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 22:20:16.825660  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:20:16.825870  255491 start.go:365] acquiring machines lock for default-k8s-diff-port-321287: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:20:17.790384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:20.862432  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:26.942301  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:30.014393  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:36.094411  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:39.166376  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:45.246382  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:48.318418  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:54.398388  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:20:57.470394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:03.550380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:06.622365  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:12.702351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:15.774370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:21.854413  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:24.926351  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:31.006415  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:34.078332  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:40.158437  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:43.230410  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:49.310359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:52.382386  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:21:58.462394  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:01.534395  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:07.614359  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:10.686384  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:16.766363  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:19.838352  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:25.918380  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:28.990416  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:35.070383  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:38.142364  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:44.222341  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:47.294387  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:53.374378  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:22:56.446375  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:02.526335  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:05.598406  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:11.678435  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:14.750370  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:20.830484  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:23.902346  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:29.982456  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:33.054379  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:39.134436  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:42.206472  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:48.286396  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:51.358348  254975 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.56:22: connect: no route to host
	I0817 22:23:54.362645  255057 start.go:369] acquired machines lock for "no-preload-525875" in 4m31.301140971s
	I0817 22:23:54.362883  255057 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:23:54.362929  255057 fix.go:54] fixHost starting: 
	I0817 22:23:54.363423  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:23:54.363467  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:23:54.379127  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0817 22:23:54.379699  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:23:54.380334  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:23:54.380357  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:23:54.380797  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:23:54.381004  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:23:54.381209  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:23:54.383099  255057 fix.go:102] recreateIfNeeded on no-preload-525875: state=Stopped err=<nil>
	I0817 22:23:54.383145  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	W0817 22:23:54.383332  255057 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:23:54.385187  255057 out.go:177] * Restarting existing kvm2 VM for "no-preload-525875" ...
	I0817 22:23:54.360325  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:23:54.360394  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:23:54.362467  254975 machine.go:91] provisioned docker machine in 4m37.411699893s
	I0817 22:23:54.362520  254975 fix.go:56] fixHost completed within 4m37.434281244s
	I0817 22:23:54.362529  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 4m37.434304432s
	W0817 22:23:54.362577  254975 start.go:672] error starting host: provision: host is not running
	W0817 22:23:54.363017  254975 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0817 22:23:54.363033  254975 start.go:687] Will try again in 5 seconds ...
	I0817 22:23:54.386615  255057 main.go:141] libmachine: (no-preload-525875) Calling .Start
	I0817 22:23:54.386791  255057 main.go:141] libmachine: (no-preload-525875) Ensuring networks are active...
	I0817 22:23:54.387647  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network default is active
	I0817 22:23:54.387973  255057 main.go:141] libmachine: (no-preload-525875) Ensuring network mk-no-preload-525875 is active
	I0817 22:23:54.388332  255057 main.go:141] libmachine: (no-preload-525875) Getting domain xml...
	I0817 22:23:54.389183  255057 main.go:141] libmachine: (no-preload-525875) Creating domain...
	I0817 22:23:55.639391  255057 main.go:141] libmachine: (no-preload-525875) Waiting to get IP...
	I0817 22:23:55.640405  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.640824  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.640956  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.640807  256033 retry.go:31] will retry after 256.854902ms: waiting for machine to come up
	I0817 22:23:55.899499  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:55.900003  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:55.900027  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:55.899976  256033 retry.go:31] will retry after 327.686689ms: waiting for machine to come up
	I0817 22:23:56.229604  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.230132  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.230156  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.230040  256033 retry.go:31] will retry after 464.52975ms: waiting for machine to come up
	I0817 22:23:56.695962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:56.696359  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:56.696397  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:56.696313  256033 retry.go:31] will retry after 556.975938ms: waiting for machine to come up
	I0817 22:23:57.255156  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.255625  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.255664  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.255564  256033 retry.go:31] will retry after 654.756806ms: waiting for machine to come up
	I0817 22:23:57.911407  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:57.911781  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:57.911805  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:57.911733  256033 retry.go:31] will retry after 915.751745ms: waiting for machine to come up
	I0817 22:23:59.364671  254975 start.go:365] acquiring machines lock for old-k8s-version-294781: {Name:mk213773f144676c6fbe559fb9c7befe36415f86 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0817 22:23:58.828834  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:23:58.829178  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:23:58.829236  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:23:58.829153  256033 retry.go:31] will retry after 1.176413613s: waiting for machine to come up
	I0817 22:24:00.006988  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:00.007533  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:00.007603  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:00.007525  256033 retry.go:31] will retry after 1.031006631s: waiting for machine to come up
	I0817 22:24:01.039920  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:01.040354  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:01.040386  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:01.040293  256033 retry.go:31] will retry after 1.781447675s: waiting for machine to come up
	I0817 22:24:02.823240  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:02.823711  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:02.823755  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:02.823652  256033 retry.go:31] will retry after 1.47392319s: waiting for machine to come up
	I0817 22:24:04.299094  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:04.299543  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:04.299572  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:04.299479  256033 retry.go:31] will retry after 1.990284782s: waiting for machine to come up
	I0817 22:24:06.292369  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:06.292831  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:06.292862  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:06.292749  256033 retry.go:31] will retry after 3.34318874s: waiting for machine to come up
	I0817 22:24:09.637907  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:09.638389  255057 main.go:141] libmachine: (no-preload-525875) DBG | unable to find current IP address of domain no-preload-525875 in network mk-no-preload-525875
	I0817 22:24:09.638423  255057 main.go:141] libmachine: (no-preload-525875) DBG | I0817 22:24:09.638335  256033 retry.go:31] will retry after 3.298106143s: waiting for machine to come up
	I0817 22:24:12.939215  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939668  255057 main.go:141] libmachine: (no-preload-525875) Found IP for machine: 192.168.61.196
	I0817 22:24:12.939692  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has current primary IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.939709  255057 main.go:141] libmachine: (no-preload-525875) Reserving static IP address...
	I0817 22:24:12.940293  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.940330  255057 main.go:141] libmachine: (no-preload-525875) Reserved static IP address: 192.168.61.196
	I0817 22:24:12.940347  255057 main.go:141] libmachine: (no-preload-525875) DBG | skip adding static IP to network mk-no-preload-525875 - found existing host DHCP lease matching {name: "no-preload-525875", mac: "52:54:00:5a:56:e4", ip: "192.168.61.196"}
	I0817 22:24:12.940364  255057 main.go:141] libmachine: (no-preload-525875) DBG | Getting to WaitForSSH function...
	I0817 22:24:12.940381  255057 main.go:141] libmachine: (no-preload-525875) Waiting for SSH to be available...
	I0817 22:24:12.942523  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.942835  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:12.942870  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:12.943013  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH client type: external
	I0817 22:24:12.943058  255057 main.go:141] libmachine: (no-preload-525875) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa (-rw-------)
	I0817 22:24:12.943104  255057 main.go:141] libmachine: (no-preload-525875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:12.943125  255057 main.go:141] libmachine: (no-preload-525875) DBG | About to run SSH command:
	I0817 22:24:12.943135  255057 main.go:141] libmachine: (no-preload-525875) DBG | exit 0
	I0817 22:24:14.123211  255215 start.go:369] acquired machines lock for "embed-certs-437183" in 4m31.345681226s
	I0817 22:24:14.123281  255215 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:14.123298  255215 fix.go:54] fixHost starting: 
	I0817 22:24:14.123769  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:14.123822  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:14.141321  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0817 22:24:14.141722  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:14.142372  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:24:14.142409  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:14.142871  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:14.143076  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:14.143300  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:24:14.144928  255215 fix.go:102] recreateIfNeeded on embed-certs-437183: state=Stopped err=<nil>
	I0817 22:24:14.144960  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	W0817 22:24:14.145216  255215 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:14.148036  255215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-437183" ...
	I0817 22:24:13.033987  255057 main.go:141] libmachine: (no-preload-525875) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:13.034450  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetConfigRaw
	I0817 22:24:13.035251  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.037756  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038141  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.038176  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.038475  255057 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/config.json ...
	I0817 22:24:13.038679  255057 machine.go:88] provisioning docker machine ...
	I0817 22:24:13.038704  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.038922  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039086  255057 buildroot.go:166] provisioning hostname "no-preload-525875"
	I0817 22:24:13.039109  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.039238  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.041385  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041666  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.041698  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.041838  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.042022  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042206  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.042396  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.042612  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.043170  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.043189  255057 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-525875 && echo "no-preload-525875" | sudo tee /etc/hostname
	I0817 22:24:13.177388  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-525875
	
	I0817 22:24:13.177433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.180249  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180571  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.180599  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.180808  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.181054  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181224  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.181371  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.181544  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.181969  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.181994  255057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-525875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-525875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-525875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:13.307614  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:13.307675  255057 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:13.307719  255057 buildroot.go:174] setting up certificates
	I0817 22:24:13.307731  255057 provision.go:83] configureAuth start
	I0817 22:24:13.307745  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetMachineName
	I0817 22:24:13.308044  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:13.311084  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311457  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.311491  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.311665  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.313712  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314066  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.314101  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.314252  255057 provision.go:138] copyHostCerts
	I0817 22:24:13.314354  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:13.314397  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:13.314495  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:13.314610  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:13.314623  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:13.314661  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:13.314735  255057 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:13.314745  255057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:13.314779  255057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:13.314841  255057 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.no-preload-525875 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube no-preload-525875]
	I0817 22:24:13.395589  255057 provision.go:172] copyRemoteCerts
	I0817 22:24:13.395693  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:13.395724  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.398603  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.398936  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.398972  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.399154  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.399379  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.399566  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.399717  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.487194  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:13.510918  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 22:24:13.534013  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:13.556876  255057 provision.go:86] duration metric: configureAuth took 249.122979ms
	I0817 22:24:13.556910  255057 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:13.557143  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:13.557265  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.560140  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560483  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.560514  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.560748  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.560965  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561143  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.561274  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.561516  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:13.562128  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:13.562155  255057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:13.863145  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:13.863181  255057 machine.go:91] provisioned docker machine in 824.487372ms
	I0817 22:24:13.863206  255057 start.go:300] post-start starting for "no-preload-525875" (driver="kvm2")
	I0817 22:24:13.863219  255057 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:13.863247  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:13.863636  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:13.863681  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:13.866612  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.866950  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:13.867000  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:13.867115  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:13.867333  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:13.867524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:13.867695  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:13.957157  255057 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:13.961765  255057 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:13.961801  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:13.961919  255057 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:13.962002  255057 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:13.962116  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:13.971105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:13.999336  255057 start.go:303] post-start completed in 136.111451ms
	I0817 22:24:13.999367  255057 fix.go:56] fixHost completed within 19.636437946s
	I0817 22:24:13.999391  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.002294  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002689  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.002717  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.002995  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.003236  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003433  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.003572  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.003744  255057 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:14.004145  255057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I0817 22:24:14.004160  255057 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:14.122987  255057 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311054.069328214
	
	I0817 22:24:14.123011  255057 fix.go:206] guest clock: 1692311054.069328214
	I0817 22:24:14.123019  255057 fix.go:219] Guest: 2023-08-17 22:24:14.069328214 +0000 UTC Remote: 2023-08-17 22:24:13.999370872 +0000 UTC m=+291.082280559 (delta=69.957342ms)
	I0817 22:24:14.123080  255057 fix.go:190] guest clock delta is within tolerance: 69.957342ms
	I0817 22:24:14.123087  255057 start.go:83] releasing machines lock for "no-preload-525875", held for 19.760401588s
	I0817 22:24:14.123125  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.123445  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:14.126573  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.126925  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.126962  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.127146  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127781  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.127974  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:14.128071  255057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:14.128125  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.128226  255057 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:14.128258  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:14.131020  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131333  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131367  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131390  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131524  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.131715  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.131789  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:14.131829  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:14.131895  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.131975  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:14.132057  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.132156  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:14.132272  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:14.132425  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:14.219665  255057 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:14.247437  255057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:14.400674  255057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:14.408384  255057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:14.408502  255057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:14.423811  255057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:14.423860  255057 start.go:466] detecting cgroup driver to use...
	I0817 22:24:14.423953  255057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:14.436628  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:14.448671  255057 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:14.448765  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:14.461946  255057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:14.475294  255057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:14.581194  255057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:14.708045  255057 docker.go:212] disabling docker service ...
	I0817 22:24:14.708110  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:14.722033  255057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:14.733323  255057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:14.857587  255057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:14.980798  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:14.994728  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:15.012428  255057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:15.012505  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.021683  255057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:15.021763  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.031095  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.040825  255057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:15.050770  255057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:15.060644  255057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:15.068941  255057 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:15.069022  255057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:15.081634  255057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:15.090552  255057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:15.205174  255057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:15.383127  255057 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:15.383224  255057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:15.391893  255057 start.go:534] Will wait 60s for crictl version
	I0817 22:24:15.391983  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.398121  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:15.450273  255057 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:15.450368  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.506757  255057 ssh_runner.go:195] Run: crio --version
	I0817 22:24:15.560170  255057 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.1 ...
	I0817 22:24:14.149845  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Start
	I0817 22:24:14.150032  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring networks are active...
	I0817 22:24:14.150803  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network default is active
	I0817 22:24:14.151110  255215 main.go:141] libmachine: (embed-certs-437183) Ensuring network mk-embed-certs-437183 is active
	I0817 22:24:14.151492  255215 main.go:141] libmachine: (embed-certs-437183) Getting domain xml...
	I0817 22:24:14.152247  255215 main.go:141] libmachine: (embed-certs-437183) Creating domain...
	I0817 22:24:15.472135  255215 main.go:141] libmachine: (embed-certs-437183) Waiting to get IP...
	I0817 22:24:15.473014  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.473413  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.473492  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.473421  256157 retry.go:31] will retry after 194.38634ms: waiting for machine to come up
	I0817 22:24:15.670047  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:15.670479  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:15.670528  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:15.670445  256157 retry.go:31] will retry after 332.988154ms: waiting for machine to come up
	I0817 22:24:16.005357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.005862  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.005898  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.005790  256157 retry.go:31] will retry after 376.364025ms: waiting for machine to come up
	I0817 22:24:16.384423  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.384866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.384916  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.384805  256157 retry.go:31] will retry after 392.048125ms: waiting for machine to come up
	I0817 22:24:16.778356  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:16.778744  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:16.778780  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:16.778683  256157 retry.go:31] will retry after 688.962088ms: waiting for machine to come up
	I0817 22:24:17.469767  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:17.470257  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:17.470287  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:17.470211  256157 retry.go:31] will retry after 660.617465ms: waiting for machine to come up
	I0817 22:24:15.561695  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetIP
	I0817 22:24:15.564750  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565097  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:15.565127  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:15.565409  255057 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:15.569673  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:15.584980  255057 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 22:24:15.585030  255057 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:15.617365  255057 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 22:24:15.617396  255057 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.0-rc.1 registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 registry.k8s.io/kube-scheduler:v1.28.0-rc.1 registry.k8s.io/kube-proxy:v1.28.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:24:15.617470  255057 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.617497  255057 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.617529  255057 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.617606  255057 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.617541  255057 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.617637  255057 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0817 22:24:15.617507  255057 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.617985  255057 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619154  255057 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0817 22:24:15.619338  255057 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.619355  255057 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.619350  255057 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.619369  255057 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.619335  255057 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.619381  255057 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.619414  255057 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.793551  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.793935  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.796339  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.797436  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.806385  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:15.813161  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0817 22:24:15.840200  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:15.935464  255057 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:15.940863  255057 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0817 22:24:15.940940  255057 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:15.940881  255057 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" does not exist at hash "046b029a42f58ef5cfea828a3b6eef129976080fc6305859555b8d772d45a8fd" in container runtime
	I0817 22:24:15.941028  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.941031  255057 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:15.941115  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952609  255057 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" does not exist at hash "e7e0b28eb3885ef3c56036ba8563c19f8f1845b9f7aa5079f36fee055a46f4ef" in container runtime
	I0817 22:24:15.952687  255057 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:15.952709  255057 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0817 22:24:15.952741  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:15.952751  255057 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:15.952790  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.007640  255057 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" does not exist at hash "2baa625e9aff176ef248fbe509f44c44ba2fbc1aee28537b717b8b55f399d77d" in container runtime
	I0817 22:24:16.007686  255057 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.007740  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099763  255057 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.0-rc.1" does not exist at hash "cd22d05ce6c939ae57ff699d4a1e257ce026cabceeca099b52d9ddcb5f5111c8" in container runtime
	I0817 22:24:16.099817  255057 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.099873  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.099909  255057 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0817 22:24:16.099969  255057 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.099980  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 22:24:16.100019  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:24:16.100052  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0817 22:24:16.100127  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 22:24:16.100145  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0817 22:24:16.100198  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 22:24:16.105175  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 22:24:16.197301  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0817 22:24:16.197377  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197418  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197432  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.197437  255057 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:16.197476  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.197421  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:16.197520  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:16.197535  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:16.214043  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0817 22:24:16.214189  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:16.225659  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1 (exists)
	I0817 22:24:16.225690  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225750  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1
	I0817 22:24:16.225882  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.225973  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:16.229070  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1 (exists)
	I0817 22:24:16.229235  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1 (exists)
	I0817 22:24:16.258828  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0817 22:24:16.258905  255057 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 22:24:16.258990  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0817 22:24:16.259013  255057 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:18.132851  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:18.133243  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:18.133310  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:18.133225  256157 retry.go:31] will retry after 900.178694ms: waiting for machine to come up
	I0817 22:24:19.035179  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:19.035579  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:19.035615  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:19.035514  256157 retry.go:31] will retry after 1.198702878s: waiting for machine to come up
	I0817 22:24:20.236711  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:20.237240  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:20.237273  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:20.237201  256157 retry.go:31] will retry after 1.809846012s: waiting for machine to come up
	I0817 22:24:22.048866  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:22.049357  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:22.049392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:22.049300  256157 retry.go:31] will retry after 1.671738979s: waiting for machine to come up
	I0817 22:24:18.395405  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.0-rc.1: (2.169611406s)
	I0817 22:24:18.395443  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 from cache
	I0817 22:24:18.395478  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (2.169478272s)
	I0817 22:24:18.395493  255057 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.136469625s)
	I0817 22:24:18.395493  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:18.395509  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0817 22:24:18.395512  255057 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1 (exists)
	I0817 22:24:18.395560  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1
	I0817 22:24:20.871009  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.0-rc.1: (2.475415377s)
	I0817 22:24:20.871043  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 from cache
	I0817 22:24:20.871073  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:20.871129  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1
	I0817 22:24:23.722312  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:23.722829  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:23.722864  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:23.722757  256157 retry.go:31] will retry after 1.856182792s: waiting for machine to come up
	I0817 22:24:25.580432  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:25.580936  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:25.580969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:25.580873  256157 retry.go:31] will retry after 2.404448523s: waiting for machine to come up
	I0817 22:24:23.529377  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.0-rc.1: (2.658213494s)
	I0817 22:24:23.529418  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 from cache
	I0817 22:24:23.529456  255057 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:23.529532  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0817 22:24:24.907071  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.377507339s)
	I0817 22:24:24.907105  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0817 22:24:24.907135  255057 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:24.907203  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0817 22:24:27.988784  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:27.989226  255215 main.go:141] libmachine: (embed-certs-437183) DBG | unable to find current IP address of domain embed-certs-437183 in network mk-embed-certs-437183
	I0817 22:24:27.989252  255215 main.go:141] libmachine: (embed-certs-437183) DBG | I0817 22:24:27.989214  256157 retry.go:31] will retry after 4.145677854s: waiting for machine to come up
	I0817 22:24:32.139031  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139722  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has current primary IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.139755  255215 main.go:141] libmachine: (embed-certs-437183) Found IP for machine: 192.168.39.186
	I0817 22:24:32.139768  255215 main.go:141] libmachine: (embed-certs-437183) Reserving static IP address...
	I0817 22:24:32.140361  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.140408  255215 main.go:141] libmachine: (embed-certs-437183) Reserved static IP address: 192.168.39.186
	I0817 22:24:32.140428  255215 main.go:141] libmachine: (embed-certs-437183) DBG | skip adding static IP to network mk-embed-certs-437183 - found existing host DHCP lease matching {name: "embed-certs-437183", mac: "52:54:00:c7:c0:2b", ip: "192.168.39.186"}
	I0817 22:24:32.140450  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Getting to WaitForSSH function...
	I0817 22:24:32.140465  255215 main.go:141] libmachine: (embed-certs-437183) Waiting for SSH to be available...
	I0817 22:24:32.142752  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143141  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.143192  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.143343  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH client type: external
	I0817 22:24:32.143392  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa (-rw-------)
	I0817 22:24:32.143431  255215 main.go:141] libmachine: (embed-certs-437183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:32.143459  255215 main.go:141] libmachine: (embed-certs-437183) DBG | About to run SSH command:
	I0817 22:24:32.143475  255215 main.go:141] libmachine: (embed-certs-437183) DBG | exit 0
	I0817 22:24:32.246211  255215 main.go:141] libmachine: (embed-certs-437183) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:32.246582  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetConfigRaw
	I0817 22:24:32.247284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.249789  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250204  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.250237  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.250567  255215 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/config.json ...
	I0817 22:24:32.250808  255215 machine.go:88] provisioning docker machine ...
	I0817 22:24:32.250831  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:32.251049  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251209  255215 buildroot.go:166] provisioning hostname "embed-certs-437183"
	I0817 22:24:32.251230  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.251344  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.253729  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254094  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.254124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.254276  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.254434  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254654  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.254817  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.254981  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.255466  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.255481  255215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-437183 && echo "embed-certs-437183" | sudo tee /etc/hostname
	I0817 22:24:32.412247  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437183
	
	I0817 22:24:32.412284  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.415194  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415508  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.415561  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.415666  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.415910  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416113  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.416297  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.416501  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.417004  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.417024  255215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-437183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-437183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-437183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:32.559200  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:24:32.559253  255215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:32.559282  255215 buildroot.go:174] setting up certificates
	I0817 22:24:32.559299  255215 provision.go:83] configureAuth start
	I0817 22:24:32.559313  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetMachineName
	I0817 22:24:32.559696  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:32.562469  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.562960  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.562989  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.563141  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.565760  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566120  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.566178  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.566344  255215 provision.go:138] copyHostCerts
	I0817 22:24:32.566427  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:32.566443  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:32.566504  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:32.566633  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:32.566642  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:32.566676  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:32.566730  255215 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:32.566738  255215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:32.566755  255215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:32.566803  255215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.embed-certs-437183 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube embed-certs-437183]
	I0817 22:24:31.437386  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.530148826s)
	I0817 22:24:31.437453  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0817 22:24:31.437478  255057 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:31.437578  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0817 22:24:32.398228  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0817 22:24:32.398294  255057 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:32.398359  255057 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1
	I0817 22:24:33.487487  255491 start.go:369] acquired machines lock for "default-k8s-diff-port-321287" in 4m16.661569765s
	I0817 22:24:33.487552  255491 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:33.487569  255491 fix.go:54] fixHost starting: 
	I0817 22:24:33.488059  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:33.488104  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:33.506430  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0817 22:24:33.506958  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:33.507587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:24:33.507618  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:33.508078  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:33.508296  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:33.508471  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:24:33.510492  255491 fix.go:102] recreateIfNeeded on default-k8s-diff-port-321287: state=Stopped err=<nil>
	I0817 22:24:33.510539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	W0817 22:24:33.510738  255491 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:33.512965  255491 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-321287" ...
	I0817 22:24:32.687763  255215 provision.go:172] copyRemoteCerts
	I0817 22:24:32.687835  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:32.687864  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.690614  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.690921  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.690963  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.691253  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.691469  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.691631  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.691745  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:32.788388  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:24:32.811861  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:32.835407  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0817 22:24:32.858542  255215 provision.go:86] duration metric: configureAuth took 299.225654ms
	I0817 22:24:32.858581  255215 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:32.858850  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:32.858989  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:32.861726  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862140  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:32.862186  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:32.862436  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:32.862717  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.862961  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:32.863135  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:32.863321  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:32.863744  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:32.863762  255215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:33.202904  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:33.202942  255215 machine.go:91] provisioned docker machine in 952.11856ms
	I0817 22:24:33.202986  255215 start.go:300] post-start starting for "embed-certs-437183" (driver="kvm2")
	I0817 22:24:33.203002  255215 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:33.203039  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.203427  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:33.203465  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.206544  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.206969  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.207004  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.207154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.207407  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.207591  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.207747  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.304648  255215 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:33.309404  255215 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:33.309435  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:33.309536  255215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:33.309635  255215 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:33.309752  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:33.318682  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:33.343830  255215 start.go:303] post-start completed in 140.8201ms
	I0817 22:24:33.343870  255215 fix.go:56] fixHost completed within 19.220571855s
	I0817 22:24:33.343901  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.347196  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347625  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.347658  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.347927  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.348154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348336  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.348487  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.348741  255215 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:33.349346  255215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0817 22:24:33.349361  255215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0817 22:24:33.487290  255215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311073.433845199
	
	I0817 22:24:33.487319  255215 fix.go:206] guest clock: 1692311073.433845199
	I0817 22:24:33.487331  255215 fix.go:219] Guest: 2023-08-17 22:24:33.433845199 +0000 UTC Remote: 2023-08-17 22:24:33.343875474 +0000 UTC m=+290.714391364 (delta=89.969725ms)
	I0817 22:24:33.487370  255215 fix.go:190] guest clock delta is within tolerance: 89.969725ms
	I0817 22:24:33.487378  255215 start.go:83] releasing machines lock for "embed-certs-437183", held for 19.364124776s
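The fix.go lines above compare the guest clock against the host and accept the ~90ms difference as within tolerance. A minimal sketch of that comparison, using the two timestamps from the log, is shown below; the 2-second tolerance constant is an assumption for illustration, not minikube's configured value.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and remote (host-side) timestamps from the log above.
	guest := time.Date(2023, 8, 17, 22, 24, 33, 433845199, time.UTC)
	remote := time.Date(2023, 8, 17, 22, 24, 33, 343875474, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}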
	I0817 22:24:33.487412  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.487714  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:33.490444  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.490945  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.490975  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.491191  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492024  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492278  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:24:33.492378  255215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:33.492440  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.492569  255215 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:33.492600  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:24:33.495461  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495742  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.495836  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.495879  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496124  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:33.496130  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496147  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:33.496287  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:24:33.496341  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496445  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:24:33.496604  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496605  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:24:33.496792  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.496886  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:24:33.634234  255215 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:33.642529  255215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:33.802107  255215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:33.808439  255215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:33.808520  255215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:33.823947  255215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
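The cni.go step above disables any bridge/podman CNI configs by renaming them with a .mk_disabled suffix before CRI-O is reconfigured. A minimal sketch of that rename pass (assuming root access to /etc/cni/net.d) could look like this:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Match the same file names the find command above targets.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("disabled %s\n", src)
		}
	}
}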
	I0817 22:24:33.823975  255215 start.go:466] detecting cgroup driver to use...
	I0817 22:24:33.824058  255215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:33.839665  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:33.854435  255215 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:33.854512  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:33.871530  255215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:33.886466  255215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:34.017312  255215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:34.152720  255215 docker.go:212] disabling docker service ...
	I0817 22:24:34.152811  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:34.170506  255215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:34.186072  255215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:34.327678  255215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:34.450774  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:34.468330  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:34.491610  255215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:34.491684  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.506266  255215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:34.506360  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.517471  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.531351  255215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:34.542363  255215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:34.553383  255215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:34.562937  255215 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:34.563029  255215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:34.575978  255215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:34.588500  255215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:34.715821  255215 ssh_runner.go:195] Run: sudo systemctl restart crio
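The two sed edits above pin the pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to cgroupfs before crio is restarted. A rough equivalent of those edits in Go, against the same 02-crio.conf path, is sketched below; it is an illustration, not the ssh_runner-based code minikube actually runs.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Same substitutions as the two sed commands in the log.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}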
	I0817 22:24:34.912771  255215 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:34.912853  255215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:34.918377  255215 start.go:534] Will wait 60s for crictl version
	I0817 22:24:34.918445  255215 ssh_runner.go:195] Run: which crictl
	I0817 22:24:34.922462  255215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:34.962654  255215 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:34.962754  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.020574  255215 ssh_runner.go:195] Run: crio --version
	I0817 22:24:35.078516  255215 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
	I0817 22:24:33.514448  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Start
	I0817 22:24:33.514667  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring networks are active...
	I0817 22:24:33.515504  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network default is active
	I0817 22:24:33.515973  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Ensuring network mk-default-k8s-diff-port-321287 is active
	I0817 22:24:33.516607  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Getting domain xml...
	I0817 22:24:33.517407  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Creating domain...
	I0817 22:24:35.032992  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting to get IP...
	I0817 22:24:35.034213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.034833  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.034747  256286 retry.go:31] will retry after 255.561446ms: waiting for machine to come up
	I0817 22:24:35.292497  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293071  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.293110  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.293035  256286 retry.go:31] will retry after 265.433217ms: waiting for machine to come up
	I0817 22:24:35.560591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.561221  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.561138  256286 retry.go:31] will retry after 429.726379ms: waiting for machine to come up
	I0817 22:24:35.993046  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993539  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:35.993573  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:35.993482  256286 retry.go:31] will retry after 583.273043ms: waiting for machine to come up
	I0817 22:24:36.578452  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578943  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:36.578983  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:36.578889  256286 retry.go:31] will retry after 504.577651ms: waiting for machine to come up
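The retry.go lines above poll libvirt for the new domain's DHCP lease, waiting a little longer after each failed lookup. A minimal sketch of that grow-and-retry pattern follows; lookupIP is a hypothetical stand-in, and the exact delays in the log come from minikube's retry package rather than this formula.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for "find the DHCP lease for this MAC address".
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Add some jitter and grow the delay, similar in spirit to the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("retry %d: will retry after %v: %v\n", attempt, wait, err)
		time.Sleep(wait)
		backoff += backoff / 2
	}
	fmt.Println("machine did not get an IP in time")
}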
	I0817 22:24:35.080561  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetIP
	I0817 22:24:35.083955  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084338  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:24:35.084376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:24:35.084624  255215 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:35.088994  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
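The bash one-liner above rewrites /etc/hosts so host.minikube.internal points at the gateway IP: it drops any existing entry with grep -v and appends the new line. A small Go sketch of the same rewrite (which needs root against the real file) is shown below for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any stale host.minikube.internal entry, like grep -v above.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}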
	I0817 22:24:35.104758  255215 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:35.104814  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:35.140529  255215 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:35.140606  255215 ssh_runner.go:195] Run: which lz4
	I0817 22:24:35.144869  255215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:24:35.149131  255215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:35.149168  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:24:37.067793  255215 crio.go:444] Took 1.922962 seconds to copy over tarball
	I0817 22:24:37.067867  255215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:24:34.276465  255057 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.0-rc.1: (1.878070898s)
	I0817 22:24:34.276495  255057 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 from cache
	I0817 22:24:34.276528  255057 cache_images.go:123] Successfully loaded all cached images
	I0817 22:24:34.276535  255057 cache_images.go:92] LoadImages completed in 18.659123421s
	I0817 22:24:34.276651  255057 ssh_runner.go:195] Run: crio config
	I0817 22:24:34.349440  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:34.349470  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:34.349525  255057 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:34.349559  255057 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-525875 NodeName:no-preload-525875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:34.349737  255057 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-525875"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:34.349852  255057 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-525875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:24:34.349927  255057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 22:24:34.361082  255057 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:34.361211  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:34.370571  255057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0817 22:24:34.390596  255057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 22:24:34.409602  255057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0817 22:24:34.431076  255057 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:34.435869  255057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:34.448753  255057 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875 for IP: 192.168.61.196
	I0817 22:24:34.448854  255057 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:34.449077  255057 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:34.449125  255057 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:34.449229  255057 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/client.key
	I0817 22:24:34.449287  255057 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key.0d67e2f2
	I0817 22:24:34.449320  255057 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key
	I0817 22:24:34.449438  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:34.449466  255057 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:34.449476  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:34.449499  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:34.449523  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:34.449545  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:34.449586  255057 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:34.450600  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:34.481454  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:24:34.514638  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:34.539306  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/no-preload-525875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:24:34.565390  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:34.595648  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:34.628105  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:34.654925  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:34.684138  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:34.709433  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:34.736933  255057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:34.772217  255057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:34.790940  255057 ssh_runner.go:195] Run: openssl version
	I0817 22:24:34.800419  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:34.811545  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819623  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.819697  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:34.825793  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:34.836531  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:34.847239  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852331  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.852394  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:34.861659  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:34.871817  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:34.883257  255057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889654  255057 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.889728  255057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:34.897773  255057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:34.909259  255057 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:34.914775  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:34.921549  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:34.928370  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:34.934849  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:34.941470  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:34.949932  255057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:24:34.956863  255057 kubeadm.go:404] StartCluster: {Name:no-preload-525875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-525
875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:34.957036  255057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:34.957123  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:35.005195  255057 cri.go:89] found id: ""
	I0817 22:24:35.005282  255057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:35.015727  255057 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:35.015754  255057 kubeadm.go:636] restartCluster start
	I0817 22:24:35.015821  255057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:35.025333  255057 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.026796  255057 kubeconfig.go:92] found "no-preload-525875" server: "https://192.168.61.196:8443"
	I0817 22:24:35.030361  255057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:35.040698  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.040754  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.055650  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.055675  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.055719  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.066812  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:35.567215  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:35.567291  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:35.580471  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.066958  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.067035  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.081758  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:36.567234  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:36.567320  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:36.582474  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.066970  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.067060  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.079066  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:37.567780  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:37.567887  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:37.583652  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
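The repeated "Checking apiserver status" messages above are a poll loop: the same pgrep check is retried roughly every half second until the apiserver process appears. A minimal sketch of that wait loop follows; checkAPIServer is a hypothetical placeholder for the real SSH-based check.

package main

import (
	"errors"
	"fmt"
	"time"
)

// checkAPIServer stands in for "pgrep the apiserver over SSH and probe it".
func checkAPIServer() error {
	return errors.New("unable to get apiserver pid")
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := checkAPIServer(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("apiserver did not come up within %v", timeout)
}

func main() {
	if err := waitForAPIServer(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}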
	I0817 22:24:37.085672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086184  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.086222  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.086130  256286 retry.go:31] will retry after 660.028004ms: waiting for machine to come up
	I0817 22:24:37.747563  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748056  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:37.748086  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:37.748020  256286 retry.go:31] will retry after 798.952498ms: waiting for machine to come up
	I0817 22:24:38.548762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549243  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:38.549276  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:38.549193  256286 retry.go:31] will retry after 1.15249289s: waiting for machine to come up
	I0817 22:24:39.703164  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703739  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:39.703773  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:39.703675  256286 retry.go:31] will retry after 1.300284471s: waiting for machine to come up
	I0817 22:24:41.006289  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006781  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:41.006814  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:41.006717  256286 retry.go:31] will retry after 1.500753962s: waiting for machine to come up
	I0817 22:24:40.155737  255215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087825588s)
	I0817 22:24:40.155771  255215 crio.go:451] Took 3.087946 seconds to extract the tarball
	I0817 22:24:40.155784  255215 ssh_runner.go:146] rm: /preloaded.tar.lz4
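The preload step above copies the lz4-compressed image tarball to the guest, unpacks it into /var with tar -I lz4, and then removes it. A condensed sketch of the extract-and-clean-up half, via os/exec, is shown below as an illustration of the command being run (root is required on a real host).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		return
	}
	// Remove the tarball once the images are unpacked, as the log does.
	if err := os.Remove(tarball); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}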
	I0817 22:24:40.196940  255215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:40.238837  255215 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:24:40.238863  255215 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:24:40.238934  255215 ssh_runner.go:195] Run: crio config
	I0817 22:24:40.302526  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:24:40.302552  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:40.302572  255215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:24:40.302593  255215 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-437183 NodeName:embed-certs-437183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:24:40.302793  255215 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-437183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:24:40.302860  255215 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-437183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
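The kubelet drop-in above is rendered from the cluster config: binary version, node name, node IP, and CRI socket. A minimal text/template sketch that produces the same unit text from those values follows; the template literal is an illustration of the idea, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the embed-certs-437183 config dump above.
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"Version":   "v1.27.4",
		"NodeName":  "embed-certs-437183",
		"NodeIP":    "192.168.39.186",
		"CRISocket": "unix:///var/run/crio/crio.sock",
	})
}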
	I0817 22:24:40.302914  255215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:24:40.312428  255215 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:24:40.312517  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:24:40.321824  255215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0817 22:24:40.340069  255215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:24:40.358609  255215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0817 22:24:40.376546  255215 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0817 22:24:40.380576  255215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:40.394264  255215 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183 for IP: 192.168.39.186
	I0817 22:24:40.394310  255215 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:40.394509  255215 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:24:40.394569  255215 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:24:40.394678  255215 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/client.key
	I0817 22:24:40.394749  255215 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key.d0691019
	I0817 22:24:40.394810  255215 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key
	I0817 22:24:40.394956  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:24:40.394999  255215 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:24:40.395013  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:24:40.395056  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:24:40.395096  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:24:40.395127  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:24:40.395197  255215 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:40.396122  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:24:40.421809  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:24:40.447412  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:24:40.472678  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/embed-certs-437183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:24:40.501303  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:24:40.528016  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:24:40.553741  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:24:40.581792  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:24:40.609270  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:24:40.634901  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:24:40.659698  255215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:24:40.685767  255215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:24:40.704114  255215 ssh_runner.go:195] Run: openssl version
	I0817 22:24:40.709921  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:24:40.720035  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725167  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.725232  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:24:40.731054  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:24:40.741277  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:24:40.751649  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757538  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.757621  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:24:40.763574  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:24:40.773786  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:24:40.784152  255215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790448  255215 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.790529  255215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:24:40.796689  255215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:24:40.806968  255215 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:24:40.811858  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:24:40.818172  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:24:40.824439  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:24:40.830588  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:24:40.836734  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:24:40.842857  255215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 22:24:40.849072  255215 kubeadm.go:404] StartCluster: {Name:embed-certs-437183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-437183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:24:40.849208  255215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:24:40.849269  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:40.882040  255215 cri.go:89] found id: ""
	I0817 22:24:40.882132  255215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:24:40.893833  255215 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:24:40.893859  255215 kubeadm.go:636] restartCluster start
	I0817 22:24:40.893926  255215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:24:40.906498  255215 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.907768  255215 kubeconfig.go:92] found "embed-certs-437183" server: "https://192.168.39.186:8443"
	I0817 22:24:40.910282  255215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:24:40.921945  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.922021  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.933335  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.933360  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.933417  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.944168  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.444996  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.445109  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.457502  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.944752  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.944881  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.960929  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.444350  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.444464  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.461555  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.066927  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.067043  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.082831  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:38.567259  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:38.567347  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:38.581544  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.067112  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.067211  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.078859  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:39.566916  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:39.567075  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:39.582637  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.067188  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.067286  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.082771  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:40.567236  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:40.567331  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:40.583192  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.067806  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.067953  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.082962  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:41.567559  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:41.567664  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:41.582761  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.067267  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.067357  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.078631  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.567181  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.567299  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.583270  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:42.509044  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509662  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:42.509688  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:42.509599  256286 retry.go:31] will retry after 2.726859315s: waiting for machine to come up
	I0817 22:24:45.239162  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239727  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:45.239756  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:45.239667  256286 retry.go:31] will retry after 2.868820101s: waiting for machine to come up
	I0817 22:24:42.944983  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:42.945083  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:42.960949  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.444415  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.444541  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.460157  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.944659  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.944757  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.960506  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.444408  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.444544  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.460666  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.944252  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.944358  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.956137  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.444667  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.444779  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.460524  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.944710  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:45.945003  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:45.961038  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.444556  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.444684  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.459345  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:46.944760  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:46.944858  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:46.961217  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:47.444786  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.444935  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.460748  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.067683  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.067794  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.083038  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:43.567750  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:43.567850  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:43.579427  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.066928  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.067014  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.078671  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:44.567463  255057 api_server.go:166] Checking apiserver status ...
	I0817 22:24:44.567559  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:44.579377  255057 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:45.041151  255057 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:45.041202  255057 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:45.041218  255057 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:45.041279  255057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:45.080480  255057 cri.go:89] found id: ""
	I0817 22:24:45.080569  255057 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:45.096518  255057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:45.107778  255057 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:45.107880  255057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117115  255057 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:45.117151  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.269517  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.790366  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:45.988106  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.124121  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:46.219342  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:46.219438  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.241849  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:46.795050  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.295314  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:47.795361  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.111566  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:48.112173  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:48.112079  256286 retry.go:31] will retry after 3.129130141s: waiting for machine to come up
	I0817 22:24:51.245244  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245759  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | unable to find current IP address of domain default-k8s-diff-port-321287 in network mk-default-k8s-diff-port-321287
	I0817 22:24:51.245788  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | I0817 22:24:51.245707  256286 retry.go:31] will retry after 4.573749963s: waiting for machine to come up
	I0817 22:24:47.944303  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:47.944406  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:47.960613  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.445144  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.445245  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.460221  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:48.944726  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:48.944811  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:48.958575  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.444744  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.444875  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.460348  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:49.944986  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:49.945117  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:49.958396  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.445013  255215 api_server.go:166] Checking apiserver status ...
	I0817 22:24:50.445110  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:24:50.459941  255215 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:24:50.922423  255215 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:24:50.922493  255215 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:24:50.922513  255215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:24:50.922581  255215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:24:50.964064  255215 cri.go:89] found id: ""
	I0817 22:24:50.964154  255215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:24:50.980513  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:24:50.990086  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:24:50.990152  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999907  255215 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:24:50.999935  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:51.147593  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.150655  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.002996323s)
	I0817 22:24:52.150694  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.367611  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.461186  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:52.534447  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:24:52.534547  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:52.551513  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.295087  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.794596  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:48.817042  255057 api_server.go:72] duration metric: took 2.597699698s to wait for apiserver process to appear ...
	I0817 22:24:48.817069  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:48.817086  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.817615  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:48.817653  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:48.818012  255057 api_server.go:269] stopped: https://192.168.61.196:8443/healthz: Get "https://192.168.61.196:8443/healthz": dial tcp 192.168.61.196:8443: connect: connection refused
	I0817 22:24:49.318894  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.160567  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.160612  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.160627  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.246065  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:52.246117  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:52.318300  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.394871  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.394932  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:52.818493  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:52.825349  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:52.825391  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.318277  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.324705  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:24:53.324751  255057 api_server.go:103] status: https://192.168.61.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:24:53.818240  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:24:53.823823  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:24:53.834528  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:24:53.834573  255057 api_server.go:131] duration metric: took 5.01749639s to wait for apiserver health ...
	I0817 22:24:53.834586  255057 cni.go:84] Creating CNI manager for ""
	I0817 22:24:53.834596  255057 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:24:53.836827  255057 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:53.838602  255057 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:24:53.850880  255057 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:24:53.871556  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:24:53.886793  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:24:53.886858  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:24:53.886875  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:24:53.886889  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:24:53.886902  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:24:53.886922  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:24:53.886939  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:24:53.886948  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:24:53.886961  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:24:53.886975  255057 system_pods.go:74] duration metric: took 15.392207ms to wait for pod list to return data ...
	I0817 22:24:53.886988  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:24:53.891527  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:24:53.891589  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:24:53.891630  255057 node_conditions.go:105] duration metric: took 4.635197ms to run NodePressure ...
	I0817 22:24:53.891656  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:24:54.230065  255057 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239113  255057 kubeadm.go:787] kubelet initialised
	I0817 22:24:54.239146  255057 kubeadm.go:788] duration metric: took 9.048225ms waiting for restarted kubelet to initialise ...
	I0817 22:24:54.239159  255057 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:54.251454  255057 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.266584  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266619  255057 pod_ready.go:81] duration metric: took 15.127554ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.266633  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.266645  255057 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.278901  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278932  255057 pod_ready.go:81] duration metric: took 12.266962ms waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.278944  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "etcd-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.278952  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.297982  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298020  255057 pod_ready.go:81] duration metric: took 19.058778ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.298032  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-apiserver-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.298047  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.309929  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309967  255057 pod_ready.go:81] duration metric: took 11.898508ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.309980  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.309991  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:54.676448  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676495  255057 pod_ready.go:81] duration metric: took 366.48994ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:54.676507  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-proxy-pzpk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:54.676547  255057 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.078351  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078392  255057 pod_ready.go:81] duration metric: took 401.831269ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.078405  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "kube-scheduler-no-preload-525875" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.078416  255057 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:55.476059  255057 pod_ready.go:97] node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476101  255057 pod_ready.go:81] duration metric: took 397.677369ms waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:24:55.476111  255057 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-525875" hosting pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-525875" has status "Ready":"False"
	I0817 22:24:55.476121  255057 pod_ready.go:38] duration metric: took 1.236947103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:24:55.476143  255057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:24:55.487413  255057 ops.go:34] apiserver oom_adj: -16
	I0817 22:24:55.487448  255057 kubeadm.go:640] restartCluster took 20.471686915s
	I0817 22:24:55.487459  255057 kubeadm.go:406] StartCluster complete in 20.530629906s
	I0817 22:24:55.487482  255057 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.487591  255057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:24:55.489799  255057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:24:55.490091  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:24:55.490202  255057 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:24:55.490349  255057 config.go:182] Loaded profile config "no-preload-525875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 22:24:55.490375  255057 addons.go:69] Setting storage-provisioner=true in profile "no-preload-525875"
	I0817 22:24:55.490380  255057 addons.go:69] Setting metrics-server=true in profile "no-preload-525875"
	I0817 22:24:55.490397  255057 addons.go:231] Setting addon storage-provisioner=true in "no-preload-525875"
	I0817 22:24:55.490404  255057 addons.go:231] Setting addon metrics-server=true in "no-preload-525875"
	W0817 22:24:55.490409  255057 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:24:55.490435  255057 addons.go:69] Setting default-storageclass=true in profile "no-preload-525875"
	I0817 22:24:55.490465  255057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-525875"
	I0817 22:24:55.490474  255057 host.go:66] Checking if "no-preload-525875" exists ...
	W0817 22:24:55.490413  255057 addons.go:240] addon metrics-server should already be in state true
	I0817 22:24:55.490547  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.491607  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.491742  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492181  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492232  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.492255  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.492291  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.503335  255057 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-525875" context rescaled to 1 replicas
	I0817 22:24:55.503399  255057 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:24:55.505836  255057 out.go:177] * Verifying Kubernetes components...
	I0817 22:24:55.507438  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:24:55.512841  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0817 22:24:55.513126  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0817 22:24:55.513241  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0817 22:24:55.513441  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513567  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.513770  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.514042  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514082  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514128  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514159  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514577  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514595  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.514708  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.514733  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.514804  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.515081  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.515186  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515223  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.515651  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.515699  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.532135  255057 addons.go:231] Setting addon default-storageclass=true in "no-preload-525875"
	W0817 22:24:55.532171  255057 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:24:55.532205  255057 host.go:66] Checking if "no-preload-525875" exists ...
	I0817 22:24:55.532614  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.532665  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.535464  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0817 22:24:55.537205  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:24:55.537544  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.537676  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.538005  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538022  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538197  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.538209  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.538328  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538574  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.538694  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.538757  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.540907  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.541221  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.543481  255057 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:24:55.545233  255057 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:24:55.820955  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.821534  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Found IP for machine: 192.168.50.30
	I0817 22:24:55.821557  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserving static IP address...
	I0817 22:24:55.821590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has current primary IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.822134  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.822169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | skip adding static IP to network mk-default-k8s-diff-port-321287 - found existing host DHCP lease matching {name: "default-k8s-diff-port-321287", mac: "52:54:00:24:e5:b8", ip: "192.168.50.30"}
	I0817 22:24:55.822189  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Getting to WaitForSSH function...
	I0817 22:24:55.822212  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Reserved static IP address: 192.168.50.30
	I0817 22:24:55.822225  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Waiting for SSH to be available...
	I0817 22:24:55.825198  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825591  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.825630  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.825769  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH client type: external
	I0817 22:24:55.825802  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa (-rw-------)
	I0817 22:24:55.825837  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:24:55.825855  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | About to run SSH command:
	I0817 22:24:55.825874  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | exit 0
	I0817 22:24:55.923224  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | SSH cmd err, output: <nil>: 
	I0817 22:24:55.923669  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetConfigRaw
	I0817 22:24:55.924434  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:55.927453  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.927935  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.927987  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.928304  255491 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/config.json ...
	I0817 22:24:55.928581  255491 machine.go:88] provisioning docker machine ...
	I0817 22:24:55.928610  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:55.928818  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.928963  255491 buildroot.go:166] provisioning hostname "default-k8s-diff-port-321287"
	I0817 22:24:55.928984  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:55.929169  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:55.931672  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932179  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:55.932213  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:55.932379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:55.932606  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.932862  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:55.933008  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:55.933228  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:55.933895  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:55.933917  255491 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-321287 && echo "default-k8s-diff-port-321287" | sudo tee /etc/hostname
	I0817 22:24:56.066560  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-321287
	
	I0817 22:24:56.066599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.070072  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070509  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.070590  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.070901  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.071175  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071377  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.071589  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.071813  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.072479  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.072511  255491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-321287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-321287/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-321287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:24:56.210857  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
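
The block above is the libmachine provisioner setting the guest hostname over SSH. Purely as an illustration (not minikube's own ssh_runner code), a minimal Go sketch that issues the same command with golang.org/x/crypto/ssh, reusing the user, key path and address shown in the log, could look like this:

    // Hypothetical illustration only -- not minikube's ssh_runner. User, key path
    // and address are copied from the provisioner log above.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log uses StrictHostKeyChecking=no
        }
        client, err := ssh.Dial("tcp", "192.168.50.30:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The same command the provisioner runs to set and persist the hostname.
        out, err := sess.CombinedOutput(`sudo hostname default-k8s-diff-port-321287 && echo "default-k8s-diff-port-321287" | sudo tee /etc/hostname`)
        fmt.Printf("output: %s, err: %v\n", out, err)
    }
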
	I0817 22:24:56.210897  255491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:24:56.210954  255491 buildroot.go:174] setting up certificates
	I0817 22:24:56.210968  255491 provision.go:83] configureAuth start
	I0817 22:24:56.210981  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetMachineName
	I0817 22:24:56.211435  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:56.214305  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214711  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.214762  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.214931  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.217766  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218200  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.218245  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.218444  255491 provision.go:138] copyHostCerts
	I0817 22:24:56.218519  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:24:56.218533  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:24:56.218609  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:24:56.218728  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:24:56.218738  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:24:56.218769  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:24:56.218846  255491 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:24:56.218856  255491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:24:56.218886  255491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:24:56.218953  255491 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-321287 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube default-k8s-diff-port-321287]
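
The line above reports minikube generating a server certificate whose SANs cover the machine IP, localhost and the hostname. As a rough, self-contained sketch of what such a SAN-bearing certificate looks like when built with Go's crypto/x509 (self-signed here for brevity, whereas the log shows minikube signing with its own CA under .minikube/certs), one might write:

    // Illustration only: a self-signed certificate carrying the same SANs the
    // log reports; minikube's real server.pem is signed by its CA instead.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-321287"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the log line above.
            DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-321287"},
            IPAddresses: []net.IP{net.ParseIP("192.168.50.30"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
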
	I0817 22:24:56.289985  255491 provision.go:172] copyRemoteCerts
	I0817 22:24:56.290068  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:24:56.290104  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.293536  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.293996  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.294027  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.294218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.294456  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.294675  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.294866  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.386746  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:24:56.413448  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 22:24:56.438758  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 22:24:56.467489  255491 provision.go:86] duration metric: configureAuth took 256.504259ms
	I0817 22:24:56.467525  255491 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:24:56.467792  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:24:56.467917  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.470870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.471373  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.471601  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.471839  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472048  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.472218  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.472441  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.473139  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.473162  255491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:24:57.100503  254975 start.go:369] acquired machines lock for "old-k8s-version-294781" in 57.735745135s
	I0817 22:24:57.100571  254975 start.go:96] Skipping create...Using existing machine configuration
	I0817 22:24:57.100583  254975 fix.go:54] fixHost starting: 
	I0817 22:24:57.101120  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:57.101172  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:57.121393  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0817 22:24:57.122017  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:57.122807  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:24:57.122834  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:57.123289  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:57.123463  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:24:57.123584  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:24:57.125545  254975 fix.go:102] recreateIfNeeded on old-k8s-version-294781: state=Stopped err=<nil>
	I0817 22:24:57.125580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	W0817 22:24:57.125759  254975 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 22:24:57.127853  254975 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-294781" ...
	I0817 22:24:55.546816  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:24:55.546839  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:24:55.546870  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.545324  255057 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.546955  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:24:55.546971  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.551364  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552354  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.552580  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0817 22:24:55.552920  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.552950  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553052  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.553160  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553171  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.553238  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.553408  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.553592  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553747  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.553751  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553805  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.553823  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.553914  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.553952  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554237  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.554648  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.554839  255057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:24:55.554878  255057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:24:55.594781  255057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0817 22:24:55.595253  255057 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:24:55.595928  255057 main.go:141] libmachine: Using API Version  1
	I0817 22:24:55.595955  255057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:24:55.596358  255057 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:24:55.596659  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetState
	I0817 22:24:55.598866  255057 main.go:141] libmachine: (no-preload-525875) Calling .DriverName
	I0817 22:24:55.599111  255057 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.599123  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:24:55.599141  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHHostname
	I0817 22:24:55.602520  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.602895  255057 main.go:141] libmachine: (no-preload-525875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:56:e4", ip: ""} in network mk-no-preload-525875: {Iface:virbr2 ExpiryTime:2023-08-17 23:14:53 +0000 UTC Type:0 Mac:52:54:00:5a:56:e4 Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:no-preload-525875 Clientid:01:52:54:00:5a:56:e4}
	I0817 22:24:55.602924  255057 main.go:141] libmachine: (no-preload-525875) DBG | domain no-preload-525875 has defined IP address 192.168.61.196 and MAC address 52:54:00:5a:56:e4 in network mk-no-preload-525875
	I0817 22:24:55.603114  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHPort
	I0817 22:24:55.603334  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHKeyPath
	I0817 22:24:55.603537  255057 main.go:141] libmachine: (no-preload-525875) Calling .GetSSHUsername
	I0817 22:24:55.603678  255057 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/no-preload-525875/id_rsa Username:docker}
	I0817 22:24:55.693508  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:24:55.693535  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:24:55.720303  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:24:55.739691  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:24:55.739725  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:24:55.752809  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:24:55.793480  255057 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:55.793512  255057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:24:55.805075  255057 node_ready.go:35] waiting up to 6m0s for node "no-preload-525875" to be "Ready" ...
	I0817 22:24:55.805164  255057 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 22:24:55.834328  255057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:24:57.451781  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.731427598s)
	I0817 22:24:57.451824  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.698971636s)
	I0817 22:24:57.451845  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451859  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.451876  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.451887  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452756  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.452808  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.452818  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.452832  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.452842  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.452965  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453000  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453009  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453019  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453027  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453173  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453247  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453270  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.453295  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.453306  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.453677  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.453709  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.453720  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.455299  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.455300  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.455325  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.564475  255057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.730071346s)
	I0817 22:24:57.564539  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.564551  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565087  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565160  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565170  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565185  255057 main.go:141] libmachine: Making call to close driver server
	I0817 22:24:57.565217  255057 main.go:141] libmachine: (no-preload-525875) Calling .Close
	I0817 22:24:57.565483  255057 main.go:141] libmachine: (no-preload-525875) DBG | Closing plugin on server side
	I0817 22:24:57.565530  255057 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:24:57.565539  255057 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:24:57.565550  255057 addons.go:467] Verifying addon metrics-server=true in "no-preload-525875"
	I0817 22:24:57.569420  255057 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:24:53.063998  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:53.564081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.064081  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:54.564321  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.064476  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:24:55.090168  255215 api_server.go:72] duration metric: took 2.555721263s to wait for apiserver process to appear ...
	I0817 22:24:55.090200  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:24:55.090223  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:57.571712  255057 addons.go:502] enable addons completed in 2.081503451s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:24:57.882753  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
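
The addon enablement above boils down to copying the metrics-server manifests into /etc/kubernetes/addons on the guest and applying them with the bundled kubectl. A local approximation of that apply step (assuming a kubectl on PATH and the kubeconfig path taken from the log) might be:

    // Illustration only: roughly the "kubectl apply" invocation the log shows
    // minikube running over SSH to enable the metrics-server addon.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        // Same kubeconfig the log passes in the remote command's environment.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("apply failed: %v", err)
        }
    }
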
	I0817 22:24:56.835353  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:24:56.835388  255491 machine.go:91] provisioned docker machine in 906.787255ms
	I0817 22:24:56.835401  255491 start.go:300] post-start starting for "default-k8s-diff-port-321287" (driver="kvm2")
	I0817 22:24:56.835415  255491 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:24:56.835460  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:56.835881  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:24:56.835925  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.838868  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839240  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.839274  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.839366  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.839581  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.839808  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.839994  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:56.932979  255491 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:24:56.937642  255491 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:24:56.937675  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:24:56.937770  255491 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:24:56.937877  255491 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:24:56.938003  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:24:56.949478  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:24:56.975557  255491 start.go:303] post-start completed in 140.136722ms
	I0817 22:24:56.975589  255491 fix.go:56] fixHost completed within 23.488019817s
	I0817 22:24:56.975618  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:56.979039  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979486  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:56.979549  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:56.979673  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:56.979951  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980152  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:56.980301  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:56.980507  255491 main.go:141] libmachine: Using SSH client type: native
	I0817 22:24:56.981194  255491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0817 22:24:56.981211  255491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:24:57.100308  255491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311097.042275817
	
	I0817 22:24:57.100341  255491 fix.go:206] guest clock: 1692311097.042275817
	I0817 22:24:57.100351  255491 fix.go:219] Guest: 2023-08-17 22:24:57.042275817 +0000 UTC Remote: 2023-08-17 22:24:56.975593678 +0000 UTC m=+280.298176937 (delta=66.682139ms)
	I0817 22:24:57.100389  255491 fix.go:190] guest clock delta is within tolerance: 66.682139ms
	I0817 22:24:57.100396  255491 start.go:83] releasing machines lock for "default-k8s-diff-port-321287", held for 23.61286841s
	I0817 22:24:57.100436  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.100813  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:57.104312  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.104719  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.104807  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.105050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105744  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.105949  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:24:57.106081  255491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:24:57.106133  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.106268  255491 ssh_runner.go:195] Run: cat /version.json
	I0817 22:24:57.106395  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:24:57.110145  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110531  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.110577  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.110870  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.111166  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.111352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.111402  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.111567  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.112700  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:57.112751  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:57.112980  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:24:57.113206  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:24:57.113379  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:24:57.113534  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:24:57.200530  255491 ssh_runner.go:195] Run: systemctl --version
	I0817 22:24:57.232758  255491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:24:57.405574  255491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:24:57.413543  255491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:24:57.413637  255491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:24:57.438687  255491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:24:57.438718  255491 start.go:466] detecting cgroup driver to use...
	I0817 22:24:57.438808  255491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:24:57.458572  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:24:57.475320  255491 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:24:57.475397  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:24:57.493585  255491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:24:57.512274  255491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:24:57.650975  255491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:24:57.788299  255491 docker.go:212] disabling docker service ...
	I0817 22:24:57.788395  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:24:57.806350  255491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:24:57.819894  255491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:24:57.966925  255491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:24:58.088274  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:24:58.107210  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:24:58.129691  255491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 22:24:58.129766  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.141217  255491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:24:58.141388  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.153376  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.166177  255491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:24:58.177326  255491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:24:58.191627  255491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:24:58.203913  255491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:24:58.204001  255491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:24:58.222901  255491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:24:58.233280  255491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:24:58.366794  255491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:24:58.603364  255491 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:24:58.603462  255491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:24:58.616285  255491 start.go:534] Will wait 60s for crictl version
	I0817 22:24:58.616397  255491 ssh_runner.go:195] Run: which crictl
	I0817 22:24:58.622933  255491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:24:58.668866  255491 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:24:58.668961  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.735680  255491 ssh_runner.go:195] Run: crio --version
	I0817 22:24:58.800442  255491 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.1 ...
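
The cri-o preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch to the cgroupfs cgroup manager, then restarts crio. A simplified Go sketch of the same file edits (it does not reproduce the exact sed expressions, e.g. the separate deletion of pre-existing conmon_cgroup lines) could be:

    // Illustration only: mirror the config edits the log shows being done with sed.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Pin the pause image used by cri-o.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Use the cgroupfs manager and put conmon in the pod cgroup.
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            log.Fatal(err)
        }
        // A `sudo systemctl restart crio` is still needed afterwards, as in the log.
    }
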
	I0817 22:24:59.550327  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.550367  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:24:59.550385  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:24:59.646890  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:24:59.646928  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:00.147486  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.160700  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.160745  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:00.647077  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:00.685626  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:00.685678  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.147134  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.156042  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:01.156083  255215 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:01.647569  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:25:01.657291  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:25:01.686204  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:01.686260  255215 api_server.go:131] duration metric: took 6.59605111s to wait for apiserver health ...
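
Note: the long runs of 500 responses above are minikube's api_server.go polling the apiserver's /healthz endpoint until every post-start hook reports ok and the endpoint returns 200. Below is a minimal, illustrative Go sketch of that kind of polling loop; the URL, interval and timeout are assumptions for the example, not minikube's actual values, and certificate verification is skipped only because the endpoint uses a self-signed cert in this setting.

    // healthz_poll.go - illustrative sketch of waiting for /healthz to return 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// Self-signed apiserver cert: skip verification for this sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered "ok"
    			}
    			// A 500 body lists each post-start hook as [+] ok or [-] failed.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.186:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
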
	I0817 22:25:01.686274  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:25:01.686283  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:01.688856  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:24:58.802321  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetIP
	I0817 22:24:58.806172  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.806661  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:24:58.806696  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:24:58.807029  255491 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0817 22:24:58.813045  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:24:58.830937  255491 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 22:24:58.831008  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:24:58.880355  255491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.4". assuming images are not preloaded.
	I0817 22:24:58.880469  255491 ssh_runner.go:195] Run: which lz4
	I0817 22:24:58.886729  255491 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 22:24:58.893418  255491 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:24:58.893496  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437134076 bytes)
	I0817 22:25:01.093233  255491 crio.go:444] Took 2.206544 seconds to copy over tarball
	I0817 22:25:01.093422  255491 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
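
Note: the lines above show the preload restore path: crictl reports no preloaded images, so the cached preloaded-images tarball is copied to the guest as /preloaded.tar.lz4 and unpacked into /var with tar -I lz4. A simplified Go sketch of the same two steps follows; paths and the marker image name are taken from the log, error handling is reduced, and this is not minikube's actual code.

    // preload_sketch.go - check for preloaded images, otherwise extract the tarball.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func imagesPreloaded() bool {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false
    	}
    	// Real code parses the JSON; a marker-image check is enough for a sketch.
    	return bytes.Contains(out, []byte("registry.k8s.io/kube-apiserver:v1.27.4"))
    }

    func restorePreload(tarball string) error {
    	if imagesPreloaded() {
    		return nil // nothing to do
    	}
    	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
    		return fmt.Errorf("extracting %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := restorePreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }
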
	I0817 22:24:57.129390  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Start
	I0817 22:24:57.134160  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring networks are active...
	I0817 22:24:57.134190  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network default is active
	I0817 22:24:57.134205  254975 main.go:141] libmachine: (old-k8s-version-294781) Ensuring network mk-old-k8s-version-294781 is active
	I0817 22:24:57.134214  254975 main.go:141] libmachine: (old-k8s-version-294781) Getting domain xml...
	I0817 22:24:57.134228  254975 main.go:141] libmachine: (old-k8s-version-294781) Creating domain...
	I0817 22:24:58.694125  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting to get IP...
	I0817 22:24:58.695714  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:58.696209  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:58.696356  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:58.696219  256493 retry.go:31] will retry after 307.640559ms: waiting for machine to come up
	I0817 22:24:59.006214  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.008497  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.008536  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.006931  256493 retry.go:31] will retry after 316.904618ms: waiting for machine to come up
	I0817 22:24:59.325929  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.326634  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.326672  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.326593  256493 retry.go:31] will retry after 466.068046ms: waiting for machine to come up
	I0817 22:24:59.794718  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:24:59.795268  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:24:59.795294  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:24:59.795200  256493 retry.go:31] will retry after 399.064857ms: waiting for machine to come up
	I0817 22:25:00.196015  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.196733  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.196760  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.196632  256493 retry.go:31] will retry after 553.183294ms: waiting for machine to come up
	I0817 22:25:00.751687  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:00.752341  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:00.752366  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:00.752283  256493 retry.go:31] will retry after 815.149471ms: waiting for machine to come up
	I0817 22:25:01.568847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:01.569679  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:01.569709  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:01.569547  256493 retry.go:31] will retry after 827.38414ms: waiting for machine to come up
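
Note: the retry.go frames above poll libvirt for the domain's DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after 307ms ... 827ms ..."). The Go sketch below shows that retry-with-backoff pattern in isolation; the initial interval, growth factor and cap are assumptions for illustration, not minikube's actual constants.

    // retry_sketch.go - retry a check with growing, jittered backoff until a deadline.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryUntil(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	wait := 300 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		// Jitter the interval, then grow it, capped at a few seconds.
    		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		if wait < 4*time.Second {
    			wait = wait * 3 / 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	err := retryUntil(func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("done:", err)
    }
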
	I0817 22:25:01.690788  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:01.726335  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
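
Note: "Configuring bridge CNI" above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist; the log does not show its contents. The sketch below writes a typical minimal bridge + host-local + portmap conflist for the 10.244.0.0/16 pod CIDR used in this run. The JSON is an assumed, illustrative config, not necessarily byte-for-byte what minikube generates, and writing under /etc requires root.

    // cni_bridge_sketch.go - write a minimal bridge CNI conflist (illustrative content).
    package main

    import (
    	"fmt"
    	"os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println(err)
    	}
    }
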
	I0817 22:25:01.804837  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:01.844074  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:01.844121  255215 system_pods.go:61] "coredns-5d78c9869d-twvdv" [f8305fa5-f0e7-4090-af8f-a9eefe00be65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:01.844134  255215 system_pods.go:61] "etcd-embed-certs-437183" [409212ae-25eb-4221-b380-d73562531eb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:01.844143  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [a378c1e7-c439-427f-b56e-7aeb2397dda2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:01.844149  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [7d8c33ff-f8bd-4ca8-a1cd-7e03a3c1ea55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:01.844156  255215 system_pods.go:61] "kube-proxy-tqlkl" [3dc68d59-da16-4a8e-8664-24c280769e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:01.844162  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [54addcee-6a78-4a9d-9b15-a02e79ac92be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:01.844169  255215 system_pods.go:61] "metrics-server-74d5c6b9c-h5tt6" [6f8a838b-81d8-444d-aba1-fe46fefe8815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:01.844175  255215 system_pods.go:61] "storage-provisioner" [65cd2cbe-dcb1-4842-af27-551c8d0a93d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:01.844182  255215 system_pods.go:74] duration metric: took 39.323312ms to wait for pod list to return data ...
	I0817 22:25:01.844194  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:01.857431  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:01.857471  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:01.857485  255215 node_conditions.go:105] duration metric: took 13.285661ms to run NodePressure ...
	I0817 22:25:01.857511  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:02.318085  255215 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329089  255215 kubeadm.go:787] kubelet initialised
	I0817 22:25:02.329122  255215 kubeadm.go:788] duration metric: took 10.998414ms waiting for restarted kubelet to initialise ...
	I0817 22:25:02.329133  255215 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.338233  255215 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:24:59.891549  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.386499  255057 node_ready.go:58] node "no-preload-525875" has status "Ready":"False"
	I0817 22:25:02.889146  255057 node_ready.go:49] node "no-preload-525875" has status "Ready":"True"
	I0817 22:25:02.889193  255057 node_ready.go:38] duration metric: took 7.084075756s waiting for node "no-preload-525875" to be "Ready" ...
	I0817 22:25:02.889209  255057 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:02.915138  255057 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926622  255057 pod_ready.go:92] pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:02.926662  255057 pod_ready.go:81] duration metric: took 11.479543ms waiting for pod "coredns-5dd5756b68-b54g4" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:02.926677  255057 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
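
Note: the pod_ready.go lines above poll each system-critical pod until its Ready condition is True, recording a duration metric per pod. A hedged client-go sketch of that same check follows; the kubeconfig path, polling interval and pod name are assumptions for the example, and the real minikube code differs in structure.

    // pod_ready_sketch.go - wait for a pod's Ready condition with client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(cs, "kube-system", "etcd-no-preload-525875", 6*time.Minute))
    }
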
	I0817 22:25:04.597215  255491 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.503742232s)
	I0817 22:25:04.597254  255491 crio.go:451] Took 3.503924 seconds to extract the tarball
	I0817 22:25:04.597269  255491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:04.640799  255491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:04.683452  255491 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 22:25:04.683478  255491 cache_images.go:84] Images are preloaded, skipping loading
	I0817 22:25:04.683564  255491 ssh_runner.go:195] Run: crio config
	I0817 22:25:04.755546  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:04.755579  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:04.755618  255491 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:04.755646  255491 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8444 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-321287 NodeName:default-k8s-diff-port-321287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 22:25:04.755865  255491 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-321287"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:04.755964  255491 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-321287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0817 22:25:04.756040  255491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 22:25:04.768800  255491 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:04.768884  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:04.779179  255491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0817 22:25:04.798848  255491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:04.818088  255491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0817 22:25:04.839021  255491 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:04.843996  255491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:04.858954  255491 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287 for IP: 192.168.50.30
	I0817 22:25:04.858992  255491 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:04.859193  255491 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:04.859263  255491 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:04.859371  255491 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/client.key
	I0817 22:25:04.859452  255491 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key.2a920f45
	I0817 22:25:04.859519  255491 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key
	I0817 22:25:04.859673  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:04.859717  255491 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:04.859733  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:04.859766  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:04.859800  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:04.859839  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:04.859901  255491 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:04.860739  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:04.893191  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 22:25:04.923817  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:04.953192  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/default-k8s-diff-port-321287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 22:25:04.985353  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:05.015743  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:05.043565  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:05.072283  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:05.102360  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:05.131090  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:05.158164  255491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:05.183921  255491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:05.201231  255491 ssh_runner.go:195] Run: openssl version
	I0817 22:25:05.207477  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:05.218696  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224473  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.224551  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:05.230753  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:05.244810  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:05.255480  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.260972  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.261054  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:05.267724  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:05.280466  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:05.291975  255491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298403  255491 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.298519  255491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:05.306541  255491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:05.318878  255491 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:05.324755  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:05.333167  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:05.341869  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:05.350173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:05.357173  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:05.364289  255491 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
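
Note: the openssl invocations above do two different jobs: "x509 -hash -noout" produces the subject hash used to name the /etc/ssl/certs/<hash>.0 symlink (e.g. b5213941.0 for minikubeCA.pem), and "x509 -noout -checkend 86400" exits non-zero if the certificate expires within the next 24 hours. The Go sketch below reproduces both checks via os/exec; the certificate path is illustrative and creating the symlink in /etc requires root.

    // cert_checks_sketch.go - the hash-named symlink and 24h expiry checks from the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func hashLink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// Equivalent of: test -L <link> || ln -fs <certPath> <link>
    	if _, err := os.Lstat(link); err == nil {
    		return nil
    	}
    	return os.Symlink(certPath, link)
    }

    func validFor24h(certPath string) bool {
    	// -checkend 86400 exits 0 only if the cert does not expire within a day.
    	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	if err := hashLink(cert); err != nil {
    		fmt.Println("hash link:", err)
    	}
    	fmt.Println("valid for another 24h:", validFor24h(cert))
    }
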
	I0817 22:25:05.372301  255491 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-321287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-
k8s-diff-port-321287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:05.372435  255491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:05.372493  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:05.409127  255491 cri.go:89] found id: ""
	I0817 22:25:05.409211  255491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:05.420288  255491 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:05.420316  255491 kubeadm.go:636] restartCluster start
	I0817 22:25:05.420401  255491 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:05.431336  255491 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.433035  255491 kubeconfig.go:92] found "default-k8s-diff-port-321287" server: "https://192.168.50.30:8444"
	I0817 22:25:05.437153  255491 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:05.446894  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.446956  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.459319  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.459353  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.459412  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.472543  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:05.973294  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:05.973386  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:05.986474  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.473007  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.473141  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.485870  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:02.398531  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:02.399142  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:02.399174  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:02.399045  256493 retry.go:31] will retry after 1.143040413s: waiting for machine to come up
	I0817 22:25:03.543421  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:03.544040  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:03.544076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:03.543971  256493 retry.go:31] will retry after 1.654291601s: waiting for machine to come up
	I0817 22:25:05.200880  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:05.201405  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:05.201435  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:05.201350  256493 retry.go:31] will retry after 1.752048888s: waiting for machine to come up
	I0817 22:25:04.379203  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.872822  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:04.499009  255057 pod_ready.go:92] pod "etcd-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.499040  255057 pod_ready.go:81] duration metric: took 1.572354603s waiting for pod "etcd-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.499057  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761691  255057 pod_ready.go:92] pod "kube-apiserver-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.761719  255057 pod_ready.go:81] duration metric: took 262.653075ms waiting for pod "kube-apiserver-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.761734  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769937  255057 pod_ready.go:92] pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.769968  255057 pod_ready.go:81] duration metric: took 8.225874ms waiting for pod "kube-controller-manager-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.769983  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881406  255057 pod_ready.go:92] pod "kube-proxy-pzpk2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:04.881444  255057 pod_ready.go:81] duration metric: took 111.452654ms waiting for pod "kube-proxy-pzpk2" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:04.881461  255057 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643623  255057 pod_ready.go:92] pod "kube-scheduler-no-preload-525875" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:05.643648  255057 pod_ready.go:81] duration metric: took 762.178998ms waiting for pod "kube-scheduler-no-preload-525875" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:05.643658  255057 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:07.695130  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:06.972803  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:06.972898  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:06.985259  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.473416  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.473551  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.485378  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:07.973567  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:07.973708  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:07.989454  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.472762  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.472894  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.489910  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:08.972732  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:08.972822  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:08.984958  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.473569  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.473709  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.490412  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:09.972908  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:09.972987  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:09.986072  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.473333  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.473429  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.485656  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:10.973314  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:10.973423  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:10.989391  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:11.472953  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.473077  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.485192  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:06.956350  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:06.956874  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:06.956904  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:06.956830  256493 retry.go:31] will retry after 2.09338178s: waiting for machine to come up
	I0817 22:25:09.052006  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:09.052516  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:09.052549  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:09.052447  256493 retry.go:31] will retry after 3.023234706s: waiting for machine to come up
	I0817 22:25:08.877674  255215 pod_ready.go:102] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:09.370723  255215 pod_ready.go:92] pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:09.370754  255215 pod_ready.go:81] duration metric: took 7.032445075s waiting for pod "coredns-5d78c9869d-twvdv" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:09.370767  255215 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893038  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:10.893076  255215 pod_ready.go:81] duration metric: took 1.522300039s waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.893091  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918300  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:11.918330  255215 pod_ready.go:81] duration metric: took 1.025229003s waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:11.918347  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:10.192198  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:12.692398  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:11.973001  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:11.973083  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:11.984794  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.473426  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.473527  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.489566  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:12.972736  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:12.972840  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:12.984972  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.473572  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.473665  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.485760  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:13.972804  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:13.972952  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:13.984788  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.473423  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.473501  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.484892  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:14.973394  255491 api_server.go:166] Checking apiserver status ...
	I0817 22:25:14.973481  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:14.985492  255491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:15.447933  255491 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:15.447967  255491 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:15.447983  255491 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:15.448044  255491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:15.483471  255491 cri.go:89] found id: ""
	I0817 22:25:15.483596  255491 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:15.500292  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:15.510630  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:15.510695  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520738  255491 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:15.520771  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:15.635683  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:12.079485  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:12.080041  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:12.080069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:12.079986  256493 retry.go:31] will retry after 4.097355523s: waiting for machine to come up
	I0817 22:25:16.178550  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:16.179032  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | unable to find current IP address of domain old-k8s-version-294781 in network mk-old-k8s-version-294781
	I0817 22:25:16.179063  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | I0817 22:25:16.178988  256493 retry.go:31] will retry after 4.178327275s: waiting for machine to come up
	I0817 22:25:14.176089  255215 pod_ready.go:102] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:14.679850  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.679881  255215 pod_ready.go:81] duration metric: took 2.761525031s waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.679894  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685308  255215 pod_ready.go:92] pod "kube-proxy-tqlkl" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.685339  255215 pod_ready.go:81] duration metric: took 5.435708ms waiting for pod "kube-proxy-tqlkl" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.685352  255215 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967073  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:14.967099  255215 pod_ready.go:81] duration metric: took 281.740411ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:14.967110  255215 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:17.277033  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:15.190295  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:17.193522  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:16.723896  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0881723s)
	I0817 22:25:16.723933  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:16.940953  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.025208  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:17.110784  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:17.110880  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.123610  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:17.645363  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.145697  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:18.645211  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.145515  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.645764  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:19.665892  255491 api_server.go:72] duration metric: took 2.555110324s to wait for apiserver process to appear ...
	I0817 22:25:19.665920  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:19.665938  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:20.359726  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360375  254975 main.go:141] libmachine: (old-k8s-version-294781) Found IP for machine: 192.168.72.56
	I0817 22:25:20.360408  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserving static IP address...
	I0817 22:25:20.360426  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has current primary IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.360798  254975 main.go:141] libmachine: (old-k8s-version-294781) Reserved static IP address: 192.168.72.56
	I0817 22:25:20.360843  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.360866  254975 main.go:141] libmachine: (old-k8s-version-294781) Waiting for SSH to be available...
	I0817 22:25:20.360898  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | skip adding static IP to network mk-old-k8s-version-294781 - found existing host DHCP lease matching {name: "old-k8s-version-294781", mac: "52:54:00:8b:be:6b", ip: "192.168.72.56"}
	I0817 22:25:20.360918  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Getting to WaitForSSH function...
	I0817 22:25:20.363319  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.363721  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.363767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.364016  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH client type: external
	I0817 22:25:20.364069  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Using SSH private key: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa (-rw-------)
	I0817 22:25:20.364115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0817 22:25:20.364135  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | About to run SSH command:
	I0817 22:25:20.364175  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | exit 0
	I0817 22:25:20.454327  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | SSH cmd err, output: <nil>: 
	I0817 22:25:20.454772  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetConfigRaw
	I0817 22:25:20.455585  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.458846  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.459420  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.459910  254975 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/config.json ...
	I0817 22:25:20.460207  254975 machine.go:88] provisioning docker machine ...
	I0817 22:25:20.460240  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:20.460489  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460712  254975 buildroot.go:166] provisioning hostname "old-k8s-version-294781"
	I0817 22:25:20.460743  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.460912  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.463811  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464166  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.464216  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.464391  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.464610  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464779  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.464936  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.465157  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.465566  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.465578  254975 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-294781 && echo "old-k8s-version-294781" | sudo tee /etc/hostname
	I0817 22:25:20.604184  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-294781
	
	I0817 22:25:20.604223  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.607313  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.607668  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.607706  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.608091  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.608335  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608511  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.608656  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.608845  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:20.609344  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:20.609368  254975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-294781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-294781/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-294781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 22:25:20.731574  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 22:25:20.731639  254975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16865-203458/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-203458/.minikube}
	I0817 22:25:20.731679  254975 buildroot.go:174] setting up certificates
	I0817 22:25:20.731697  254975 provision.go:83] configureAuth start
	I0817 22:25:20.731717  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetMachineName
	I0817 22:25:20.732057  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:20.735344  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.735748  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.735780  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.736038  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.738896  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739346  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.739384  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.739562  254975 provision.go:138] copyHostCerts
	I0817 22:25:20.739634  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem, removing ...
	I0817 22:25:20.739650  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem
	I0817 22:25:20.739733  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/key.pem (1679 bytes)
	I0817 22:25:20.739875  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem, removing ...
	I0817 22:25:20.739889  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem
	I0817 22:25:20.739921  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/ca.pem (1082 bytes)
	I0817 22:25:20.740027  254975 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem, removing ...
	I0817 22:25:20.740040  254975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem
	I0817 22:25:20.740069  254975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-203458/.minikube/cert.pem (1123 bytes)
	I0817 22:25:20.740159  254975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-294781 san=[192.168.72.56 192.168.72.56 localhost 127.0.0.1 minikube old-k8s-version-294781]
	I0817 22:25:20.937408  254975 provision.go:172] copyRemoteCerts
	I0817 22:25:20.937480  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 22:25:20.937508  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:20.940609  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941074  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:20.941115  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:20.941294  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:20.941469  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:20.941678  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:20.941899  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.033976  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 22:25:21.062438  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 22:25:21.090325  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 22:25:21.116263  254975 provision.go:86] duration metric: configureAuth took 384.54455ms
	I0817 22:25:21.116295  254975 buildroot.go:189] setting minikube options for container-runtime
	I0817 22:25:21.116550  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:25:21.116667  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.119767  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120295  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.120351  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.120530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.120735  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.120898  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.121114  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.121330  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.121982  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.122011  254975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 22:25:21.449644  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 22:25:21.449675  254975 machine.go:91] provisioned docker machine in 989.449203ms
	I0817 22:25:21.449686  254975 start.go:300] post-start starting for "old-k8s-version-294781" (driver="kvm2")
	I0817 22:25:21.449696  254975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 22:25:21.449713  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.450065  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 22:25:21.450112  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.453436  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.453847  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.453893  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.454092  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.454320  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.454501  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.454682  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.544501  254975 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 22:25:21.549102  254975 info.go:137] Remote host: Buildroot 2021.02.12
	I0817 22:25:21.549128  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/addons for local assets ...
	I0817 22:25:21.549201  254975 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-203458/.minikube/files for local assets ...
	I0817 22:25:21.549301  254975 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem -> 2106702.pem in /etc/ssl/certs
	I0817 22:25:21.549425  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 22:25:21.559169  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:21.585459  254975 start.go:303] post-start completed in 135.754284ms
	I0817 22:25:21.585496  254975 fix.go:56] fixHost completed within 24.48491231s
	I0817 22:25:21.585531  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.588650  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589045  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.589076  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.589236  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.589445  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589638  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.589810  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.590026  254975 main.go:141] libmachine: Using SSH client type: native
	I0817 22:25:21.590596  254975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 192.168.72.56 22 <nil> <nil>}
	I0817 22:25:21.590621  254975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0817 22:25:21.704138  254975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692311121.622295369
	
	I0817 22:25:21.704162  254975 fix.go:206] guest clock: 1692311121.622295369
	I0817 22:25:21.704170  254975 fix.go:219] Guest: 2023-08-17 22:25:21.622295369 +0000 UTC Remote: 2023-08-17 22:25:21.585502401 +0000 UTC m=+364.810906249 (delta=36.792968ms)
	I0817 22:25:21.704193  254975 fix.go:190] guest clock delta is within tolerance: 36.792968ms
	I0817 22:25:21.704200  254975 start.go:83] releasing machines lock for "old-k8s-version-294781", held for 24.603659499s
	I0817 22:25:21.704228  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.704524  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:21.707198  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707512  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.707555  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.707715  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708285  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708516  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:25:21.708605  254975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 22:25:21.708670  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.708790  254975 ssh_runner.go:195] Run: cat /version.json
	I0817 22:25:21.708816  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:25:21.711462  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711744  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.711858  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.711906  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712090  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712154  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:21.712219  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:21.712326  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712347  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:25:21.712539  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712541  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:25:21.712749  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:25:21.712766  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:21.712936  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:25:19.775731  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.777036  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:19.693695  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:22.189616  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:21.818518  254975 ssh_runner.go:195] Run: systemctl --version
	I0817 22:25:21.824498  254975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 22:25:21.971461  254975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0817 22:25:21.978188  254975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0817 22:25:21.978271  254975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 22:25:21.993704  254975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 22:25:21.993738  254975 start.go:466] detecting cgroup driver to use...
	I0817 22:25:21.993820  254975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 22:25:22.009074  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 22:25:22.022874  254975 docker.go:196] disabling cri-docker service (if available) ...
	I0817 22:25:22.022935  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 22:25:22.036508  254975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 22:25:22.050919  254975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 22:25:22.174894  254975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 22:25:22.307776  254975 docker.go:212] disabling docker service ...
	I0817 22:25:22.307863  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 22:25:22.322017  254975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 22:25:22.334550  254975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 22:25:22.439721  254975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 22:25:22.554591  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 22:25:22.570460  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 22:25:22.588685  254975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0817 22:25:22.588767  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.599716  254975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 22:25:22.599801  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.611990  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.623873  254975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 22:25:22.636093  254975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 22:25:22.647438  254975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 22:25:22.657266  254975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0817 22:25:22.657338  254975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0817 22:25:22.672463  254975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 22:25:22.683508  254975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 22:25:22.799912  254975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 22:25:22.995704  254975 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 22:25:22.995816  254975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 22:25:23.003199  254975 start.go:534] Will wait 60s for crictl version
	I0817 22:25:23.003280  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:23.008350  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 22:25:23.042651  254975 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0817 22:25:23.042763  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.093624  254975 ssh_runner.go:195] Run: crio --version
	I0817 22:25:23.142140  254975 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0817 22:25:24.666188  255491 api_server.go:269] stopped: https://192.168.50.30:8444/healthz: Get "https://192.168.50.30:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:24.666264  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:24.903729  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:24.903775  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:25.404125  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.420215  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.420261  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:25.903943  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:25.914463  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0817 22:25:25.914514  255491 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0817 22:25:26.403966  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:25:26.414021  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:25:26.437708  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:25:26.437750  255491 api_server.go:131] duration metric: took 6.771821605s to wait for apiserver health ...
	I0817 22:25:26.437779  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:25:26.437789  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:26.440095  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:26.441921  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:26.469640  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:25:26.514785  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:26.532553  255491 system_pods.go:59] 8 kube-system pods found
	I0817 22:25:26.532616  255491 system_pods.go:61] "coredns-5d78c9869d-v74x9" [1c42e9be-16fa-47c2-ab04-9ec805320760] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:25:26.532631  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [a3655572-9d89-4ef6-85db-85dc454d1021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 22:25:26.532659  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [6786ac16-78df-4909-8542-0952af5beff6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 22:25:26.532675  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [ac8085d0-db9c-4229-b816-4753b7cfcae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 22:25:26.532686  255491 system_pods.go:61] "kube-proxy-4d9dx" [22447888-6570-47b7-baac-a5842688de9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 22:25:26.532697  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [bfcfc726-e659-4cb9-ad36-9887ddfaf170] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 22:25:26.532713  255491 system_pods.go:61] "metrics-server-74d5c6b9c-25l6w" [205dcf88-9d10-416b-8fd0-c93939208c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:25:26.532722  255491 system_pods.go:61] "storage-provisioner" [be486251-ebb9-4d0b-85c9-fe04e76634e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 22:25:26.532738  255491 system_pods.go:74] duration metric: took 17.92531ms to wait for pod list to return data ...
	I0817 22:25:26.532751  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:26.541133  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:26.541180  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:26.541197  255491 node_conditions.go:105] duration metric: took 8.431415ms to run NodePressure ...
	I0817 22:25:26.541228  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:23.143729  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetIP
	I0817 22:25:23.146678  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147145  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:25:23.147178  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:25:23.147433  254975 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0817 22:25:23.151860  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:23.165714  254975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 22:25:23.165805  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:23.207234  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:23.207334  254975 ssh_runner.go:195] Run: which lz4
	I0817 22:25:23.211497  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 22:25:23.216272  254975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 22:25:23.216309  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0817 22:25:25.170164  254975 crio.go:444] Took 1.958697 seconds to copy over tarball
	I0817 22:25:25.170253  254975 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 22:25:23.792764  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.276276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:24.193719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.692837  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:26.873863  255491 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:26.878982  255491 kubeadm.go:787] kubelet initialised
	I0817 22:25:26.879005  255491 kubeadm.go:788] duration metric: took 5.10797ms waiting for restarted kubelet to initialise ...
	I0817 22:25:26.879014  255491 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:25:26.885772  255491 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:29.448692  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:28.464409  254975 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.294096057s)
	I0817 22:25:28.464448  254975 crio.go:451] Took 3.294247 seconds to extract the tarball
	I0817 22:25:28.464461  254975 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 22:25:28.505546  254975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 22:25:28.550245  254975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0817 22:25:28.550282  254975 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 22:25:28.550393  254975 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.550419  254975 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.550425  254975 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.550466  254975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.550416  254975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.550388  254975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.550543  254975 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0817 22:25:28.550382  254975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551670  254975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.551673  254975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.551765  254975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.551779  254975 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.551793  254975 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0817 22:25:28.551814  254975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.551841  254975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.552852  254975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.736900  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.746950  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.747215  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.749256  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.754813  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0817 22:25:28.767639  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.778459  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.834796  254975 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:25:28.845176  254975 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0817 22:25:28.845233  254975 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:28.845295  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.896784  254975 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0817 22:25:28.896843  254975 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:28.896901  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919129  254975 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0817 22:25:28.919247  254975 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:28.919192  254975 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0817 22:25:28.919301  254975 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0817 22:25:28.919320  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.919332  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972779  254975 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0817 22:25:28.972831  254975 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0817 22:25:28.972863  254975 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0817 22:25:28.972898  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.972901  254975 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:28.973013  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:28.986909  254975 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0817 22:25:28.986957  254975 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:28.987007  254975 ssh_runner.go:195] Run: which crictl
	I0817 22:25:29.083047  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0817 22:25:29.083137  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0817 22:25:29.083204  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0817 22:25:29.083276  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0817 22:25:29.083227  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0817 22:25:29.083354  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0817 22:25:29.083408  254975 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0817 22:25:29.214678  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0817 22:25:29.214743  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0817 22:25:29.214777  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0817 22:25:29.214847  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0817 22:25:29.214934  254975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.221086  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0817 22:25:29.221101  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0817 22:25:29.221162  254975 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0817 22:25:29.223655  254975 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0817 22:25:29.223684  254975 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0817 22:25:29.223753  254975 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0817 22:25:30.774685  254975 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.550895846s)
	I0817 22:25:30.774722  254975 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0817 22:25:30.774776  254975 cache_images.go:92] LoadImages completed in 2.224475745s
	W0817 22:25:30.774942  254975 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-203458/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
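
The block above is the cached-image restore path for the v1.16.0 control-plane images: minikube asks podman for each image's stored ID, marks any image whose ID does not match the expected hash as "needs transfer", removes it with crictl, and then loads the cached tarball from /var/lib/minikube/images (here only pause_3.1 was present in the cache, so the kube-scheduler load failed). A minimal Go sketch of that check-then-load sequence follows; the helpers are hypothetical and this is not minikube's actual cache_images.go.

	package sketch

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageID asks podman (as the log does) for the stored ID of an image,
	// e.g. "registry.k8s.io/pause:3.1".
	func imageID(img string) (string, error) {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
		return strings.TrimSpace(string(out)), err
	}

	// ensureImage mirrors the "needs transfer" decision above: if the image is absent
	// or its ID differs from the expected hash, remove it and load the cached tarball
	// (e.g. /var/lib/minikube/images/pause_3.1).
	func ensureImage(img, wantID, tarball string) error {
		id, err := imageID(img)
		if err == nil && id == wantID {
			return nil // already present at the expected hash
		}
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run() // best effort, as in the log
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("loading %s from %s: %w", img, tarball, err)
		}
		return nil
	}
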
	I0817 22:25:30.775051  254975 ssh_runner.go:195] Run: crio config
	I0817 22:25:30.840592  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:30.840623  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:30.840650  254975 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 22:25:30.840680  254975 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-294781 NodeName:old-k8s-version-294781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 22:25:30.840917  254975 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-294781"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-294781
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.56:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 22:25:30.841030  254975 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-294781 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 22:25:30.841111  254975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0817 22:25:30.850719  254975 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 22:25:30.850818  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 22:25:30.862807  254975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0817 22:25:30.882111  254975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 22:25:30.900496  254975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0817 22:25:30.921163  254975 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0817 22:25:30.925789  254975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 22:25:30.941284  254975 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781 for IP: 192.168.72.56
	I0817 22:25:30.941335  254975 certs.go:190] acquiring lock for shared ca certs: {Name:mkdd65d82723b771723ae611915b68242dd4c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:25:30.941556  254975 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key
	I0817 22:25:30.941617  254975 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key
	I0817 22:25:30.941728  254975 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/client.key
	I0817 22:25:30.941792  254975 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key.aa8f9bd0
	I0817 22:25:30.941827  254975 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key
	I0817 22:25:30.941948  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem (1338 bytes)
	W0817 22:25:30.941994  254975 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670_empty.pem, impossibly tiny 0 bytes
	I0817 22:25:30.942005  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 22:25:30.942039  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/ca.pem (1082 bytes)
	I0817 22:25:30.942107  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/cert.pem (1123 bytes)
	I0817 22:25:30.942141  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/certs/home/jenkins/minikube-integration/16865-203458/.minikube/certs/key.pem (1679 bytes)
	I0817 22:25:30.942200  254975 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem (1708 bytes)
	I0817 22:25:30.942953  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 22:25:30.973814  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 22:25:31.003939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 22:25:31.035137  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/old-k8s-version-294781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 22:25:31.063172  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 22:25:31.092059  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 22:25:31.120881  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 22:25:31.148113  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 22:25:31.175102  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 22:25:31.204939  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/certs/210670.pem --> /usr/share/ca-certificates/210670.pem (1338 bytes)
	I0817 22:25:31.231548  254975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/ssl/certs/2106702.pem --> /usr/share/ca-certificates/2106702.pem (1708 bytes)
	I0817 22:25:31.263908  254975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 22:25:31.287143  254975 ssh_runner.go:195] Run: openssl version
	I0817 22:25:31.293380  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 22:25:31.307058  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313520  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.313597  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 22:25:31.321182  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 22:25:31.332412  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/210670.pem && ln -fs /usr/share/ca-certificates/210670.pem /etc/ssl/certs/210670.pem"
	I0817 22:25:31.343318  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.348972  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:19 /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.349044  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/210670.pem
	I0817 22:25:31.355568  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/210670.pem /etc/ssl/certs/51391683.0"
	I0817 22:25:31.366257  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2106702.pem && ln -fs /usr/share/ca-certificates/2106702.pem /etc/ssl/certs/2106702.pem"
	I0817 22:25:31.376489  254975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382818  254975 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:19 /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.382919  254975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2106702.pem
	I0817 22:25:31.390171  254975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2106702.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 22:25:31.400360  254975 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 22:25:31.406177  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 22:25:31.413881  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 22:25:31.422198  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 22:25:31.429468  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 22:25:31.437072  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 22:25:31.444150  254975 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
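
The certificate steps above do two things: install each CA into the system trust store under its OpenSSL subject hash as the link name (minikubeCA.pem hashes to b5213941, hence /etc/ssl/certs/b5213941.0), and probe each serving/client cert with `openssl x509 -checkend 86400`, which exits non-zero if the cert would expire within 24 hours. A hedged Go sketch of both checks, with illustrative helper names (not minikube's certs.go):

	package sketch

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash of a PEM cert; minikubeCA.pem
	// hashes to b5213941 in the log above.
	func subjectHash(pem string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		return strings.TrimSpace(string(out)), err
	}

	// trustCert installs the cert under /etc/ssl/certs/<hash>.0, as the
	// `ln -fs ... /etc/ssl/certs/b5213941.0` step does.
	func trustCert(pem string) error {
		h, err := subjectHash(pem)
		if err != nil {
			return err
		}
		return exec.Command("sudo", "ln", "-fs", pem, fmt.Sprintf("/etc/ssl/certs/%s.0", h)).Run()
	}

	// validFor24h is the `-checkend 86400` probe: a non-zero exit means the cert
	// expires within the next 86400 seconds and needs to be regenerated.
	func validFor24h(pem string) error {
		return exec.Command("openssl", "x509", "-noout", "-in", pem, "-checkend", "86400").Run()
	}
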
	I0817 22:25:31.450952  254975 kubeadm.go:404] StartCluster: {Name:old-k8s-version-294781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-294781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 22:25:31.451064  254975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 22:25:31.451140  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:31.489009  254975 cri.go:89] found id: ""
	I0817 22:25:31.489098  254975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 22:25:31.499098  254975 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 22:25:31.499126  254975 kubeadm.go:636] restartCluster start
	I0817 22:25:31.499191  254975 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 22:25:31.510909  254975 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.512049  254975 kubeconfig.go:92] found "old-k8s-version-294781" server: "https://192.168.72.56:8443"
	I0817 22:25:31.514634  254975 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 22:25:31.525968  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.526039  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.539397  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:31.539423  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:31.539485  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:31.552492  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:28.276789  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:30.406349  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:29.190524  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.195732  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:31.919929  255491 pod_ready.go:102] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.415784  255491 pod_ready.go:92] pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:32.415817  255491 pod_ready.go:81] duration metric: took 5.530013816s waiting for pod "coredns-5d78c9869d-v74x9" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:32.415840  255491 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:34.435177  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.435405  255491 pod_ready.go:102] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:32.053512  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.053604  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.065409  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:32.553555  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:32.553647  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:32.566402  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.052703  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.052785  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.069027  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:33.552583  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:33.552724  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:33.566692  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.053418  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.053493  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.065794  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:34.553389  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:34.553490  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:34.566130  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.052663  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.052753  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.065276  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:35.553446  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:35.553544  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:35.567754  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.053326  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.053407  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.066562  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:36.553098  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:36.553200  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:36.564869  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
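
Each "Checking apiserver status" round above is a pgrep probe for a kube-apiserver process belonging to this profile; the repeated "Process exited with status 1" simply means no such process exists yet, so the loop keeps polling until restartCluster decides a reconfigure is needed. A small illustrative sketch of that probe (an assumed helper, not minikube's api_server.go):

	package sketch

	import (
		"os/exec"
		"strconv"
		"strings"
	)

	// apiserverPID returns the newest matching kube-apiserver PID, or an error when
	// pgrep exits non-zero because the process is not running yet.
	func apiserverPID() (int, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return 0, err // "Process exited with status 1": not started yet
		}
		return strconv.Atoi(strings.TrimSpace(string(out)))
	}
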
	I0817 22:25:32.777224  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:35.273781  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.276847  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:33.690890  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:36.190746  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:37.435673  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.435712  255491 pod_ready.go:81] duration metric: took 5.019858859s waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.435724  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441582  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.441602  255491 pod_ready.go:81] duration metric: took 5.870633ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.441614  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448615  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.448643  255491 pod_ready.go:81] duration metric: took 7.021551ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.448656  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454742  255491 pod_ready.go:92] pod "kube-proxy-4d9dx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.454768  255491 pod_ready.go:81] duration metric: took 6.104572ms waiting for pod "kube-proxy-4d9dx" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.454780  255491 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462598  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:25:37.462623  255491 pod_ready.go:81] duration metric: took 7.834341ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:37.462637  255491 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	I0817 22:25:39.741207  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
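
The pod_ready lines interleaved through this log come from a wait loop that re-reads each pod and checks its Ready condition until it flips to True or the (typically 4m0s) timeout expires. A rough client-go sketch of such a loop, assuming a configured clientset is available (this is not minikube's pod_ready.go):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout
	// passes, which is what produces the repeated `"Ready":"False"` lines above.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // the real helper logs one status line per poll
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}
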
	I0817 22:25:37.053213  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.053363  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.065752  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:37.553604  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:37.553709  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:37.569278  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.052848  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.052956  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.065011  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:38.552809  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:38.552915  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:38.564702  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.053287  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.053378  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.065004  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:39.553557  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:39.553654  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:39.565776  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.053269  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.053352  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.065089  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:40.552595  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:40.552718  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:40.564921  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.053531  254975 api_server.go:166] Checking apiserver status ...
	I0817 22:25:41.053617  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 22:25:41.065803  254975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 22:25:41.526724  254975 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0817 22:25:41.526774  254975 kubeadm.go:1128] stopping kube-system containers ...
	I0817 22:25:41.526788  254975 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0817 22:25:41.526858  254975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 22:25:41.560831  254975 cri.go:89] found id: ""
	I0817 22:25:41.560931  254975 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 22:25:41.577926  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:25:41.587081  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:25:41.587169  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596656  254975 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 22:25:41.596690  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:41.716908  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:39.776178  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.275946  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:38.193834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:40.691324  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.692667  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:41.745307  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:44.242440  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.243469  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:42.840419  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123468828s)
	I0817 22:25:42.840454  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.062568  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:43.150374  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
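
Because existing configuration files were found, the restart path above re-runs kubeadm phase by phase against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd (with `addon all` applied later once the apiserver is healthy). A hedged sketch of driving that sequence, illustrative only:

	package sketch

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runInitPhases replays the kubeadm phases from the log against the generated
	// config, using the versioned binaries directory minikube puts on PATH.
	func runInitPhases() error {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.16.0:" + os.Getenv("PATH"), "kubeadm", "init", "phase"}
			args = append(args, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("kubeadm init phase %v failed: %v\n%s", phase, err, out)
			}
		}
		return nil
	}
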
	I0817 22:25:43.265948  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:25:43.266043  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.284133  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:43.804512  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.304041  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.803961  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:25:44.828050  254975 api_server.go:72] duration metric: took 1.562100837s to wait for apiserver process to appear ...
	I0817 22:25:44.828085  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:25:44.828102  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.828570  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:44.828611  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.829005  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": dial tcp 192.168.72.56:8443: connect: connection refused
	I0817 22:25:45.329868  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:44.276477  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:46.775206  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:45.189460  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:47.690349  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:48.741121  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.742231  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:50.330553  254975 api_server.go:269] stopped: https://192.168.72.56:8443/healthz: Get "https://192.168.72.56:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 22:25:50.330619  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.714219  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.714253  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:51.714268  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.756012  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 22:25:51.756052  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 22:25:49.276427  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.775567  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:49.698834  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:52.190711  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:51.829442  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:51.888999  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:51.889031  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.329747  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.337398  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.337432  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:52.829817  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:52.839157  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 22:25:52.839187  254975 api_server.go:103] status: https://192.168.72.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 22:25:53.329580  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:25:53.336858  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:25:53.347151  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:25:53.347191  254975 api_server.go:131] duration metric: took 8.519097199s to wait for apiserver health ...
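
The healthz sequence above is the usual cold-start progression: connection refused while the apiserver is not yet listening, 403 for the anonymous probe before the RBAC bootstrap roles exist, 500 while post-start hooks are still failing, and finally 200. A minimal polling sketch in Go, assuming the caller supplies an *http.Client configured to trust the cluster CA (not minikube's api_server.go):

	package sketch

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
	// tolerating the transient 403 and 500 responses seen in the log above.
	func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url) // e.g. https://192.168.72.56:8443/healthz
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // "returned 200: ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}
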
	I0817 22:25:53.347204  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:25:53.347212  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:25:53.349243  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:25:52.743242  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:55.241261  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:53.350976  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:25:53.364808  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
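
Configuring the bridge CNI amounts to writing a conflist into /etc/cni/net.d (the 457-byte 1-k8s.conflist above). The exact file minikube generates is not shown in this log; the sketch below is a generic containernetworking bridge + host-local example using the pod CIDR from the kubeadm options (10.244.0.0/16), written to the same path the scp step targets:

	package sketch

	import "os"

	// bridgeConflist is a generic bridge + host-local example; the exact contents
	// minikube writes to 1-k8s.conflist may differ.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	// writeConflist drops the config where CRI-O's CNI plugins will pick it up.
	func writeConflist() error {
		return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
	}
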
	I0817 22:25:53.397606  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:25:53.411868  254975 system_pods.go:59] 7 kube-system pods found
	I0817 22:25:53.411903  254975 system_pods.go:61] "coredns-5644d7b6d9-nz5d2" [5514f434-2c17-42dc-b35b-fef5bd6886fb] Running
	I0817 22:25:53.411909  254975 system_pods.go:61] "etcd-old-k8s-version-294781" [75919c29-02ae-46f6-8173-507b491d16da] Running
	I0817 22:25:53.411920  254975 system_pods.go:61] "kube-apiserver-old-k8s-version-294781" [f6d458ca-a84f-40dc-8b6a-b53fb8062c50] Running
	I0817 22:25:53.411930  254975 system_pods.go:61] "kube-controller-manager-old-k8s-version-294781" [0827f676-c11c-44b1-9bca-f8f905448490] Pending
	I0817 22:25:53.411937  254975 system_pods.go:61] "kube-proxy-f2bdh" [8b0dfe14-026a-44e1-9c6f-7f16fb61f90e] Running
	I0817 22:25:53.411943  254975 system_pods.go:61] "kube-scheduler-old-k8s-version-294781" [9ced2a30-44a8-421f-94ef-19be20b58c5d] Running
	I0817 22:25:53.411947  254975 system_pods.go:61] "storage-provisioner" [c9c05cca-5426-4071-a408-815c723a76f3] Running
	I0817 22:25:53.411954  254975 system_pods.go:74] duration metric: took 14.318728ms to wait for pod list to return data ...
	I0817 22:25:53.411961  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:25:53.415672  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:25:53.415715  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:25:53.415731  254975 node_conditions.go:105] duration metric: took 3.76549ms to run NodePressure ...
	I0817 22:25:53.415758  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 22:25:53.808911  254975 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 22:25:53.814276  254975 retry.go:31] will retry after 200.301174ms: kubelet not initialised
	I0817 22:25:54.020423  254975 retry.go:31] will retry after 376.047728ms: kubelet not initialised
	I0817 22:25:54.401967  254975 retry.go:31] will retry after 672.586884ms: kubelet not initialised
	I0817 22:25:55.079229  254975 retry.go:31] will retry after 1.101994757s: kubelet not initialised
	I0817 22:25:56.186236  254975 retry.go:31] will retry after 770.380926ms: kubelet not initialised
	I0817 22:25:53.777865  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.275799  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:54.690880  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.189416  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:57.242279  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.742604  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:56.961679  254975 retry.go:31] will retry after 2.235217601s: kubelet not initialised
	I0817 22:25:59.205012  254975 retry.go:31] will retry after 2.063266757s: kubelet not initialised
	I0817 22:26:01.275712  254975 retry.go:31] will retry after 5.105867057s: kubelet not initialised
	I0817 22:25:58.774815  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.275856  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:25:59.190180  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.692286  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:01.744707  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.240683  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.388158  254975 retry.go:31] will retry after 3.608427827s: kubelet not initialised
	I0817 22:26:03.775281  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.274839  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:04.190713  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.689980  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:06.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.742399  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.742739  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.004038  254975 retry.go:31] will retry after 8.940252852s: kubelet not initialised
	I0817 22:26:08.275499  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:10.275871  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:08.696436  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:11.189718  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.240363  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.241894  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:12.776238  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:15.274945  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:13.690119  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:16.189786  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:17.741982  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:20.242289  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.951040  254975 retry.go:31] will retry after 14.553103306s: kubelet not initialised
	I0817 22:26:17.774269  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:19.775075  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.274390  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:18.690720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:21.191013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:22.242355  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.742592  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:24.275310  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:26.774906  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:23.690032  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:25.690127  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.692342  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:27.243421  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:29.245714  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:28.777378  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.274134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:30.189730  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:32.689849  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:31.741791  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.240900  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:36.241988  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:33.521718  254975 kubeadm.go:787] kubelet initialised
	I0817 22:26:33.521745  254975 kubeadm.go:788] duration metric: took 39.712803989s waiting for restarted kubelet to initialise ...
	I0817 22:26:33.521755  254975 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:26:33.535522  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545447  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.545474  254975 pod_ready.go:81] duration metric: took 9.918514ms waiting for pod "coredns-5644d7b6d9-78ltr" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.545487  254975 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551823  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.551853  254975 pod_ready.go:81] duration metric: took 6.357251ms waiting for pod "coredns-5644d7b6d9-nz5d2" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.551867  254975 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559246  254975 pod_ready.go:92] pod "etcd-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.559278  254975 pod_ready.go:81] duration metric: took 7.402957ms waiting for pod "etcd-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.559291  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565344  254975 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.565373  254975 pod_ready.go:81] duration metric: took 6.072723ms waiting for pod "kube-apiserver-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.565387  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909036  254975 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:33.909073  254975 pod_ready.go:81] duration metric: took 343.677116ms waiting for pod "kube-controller-manager-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.909089  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308592  254975 pod_ready.go:92] pod "kube-proxy-f2bdh" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.308619  254975 pod_ready.go:81] duration metric: took 399.522419ms waiting for pod "kube-proxy-f2bdh" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.308630  254975 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708489  254975 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace has status "Ready":"True"
	I0817 22:26:34.708517  254975 pod_ready.go:81] duration metric: took 399.879822ms waiting for pod "kube-scheduler-old-k8s-version-294781" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:34.708528  254975 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	I0817 22:26:33.275646  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:35.774730  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:34.692013  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.191914  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.242929  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.741450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:37.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.516268  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:38.275712  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:40.774133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:39.690461  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:41.690828  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.242204  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.741216  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:42.016209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.516019  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:43.275668  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:45.776837  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:44.189846  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:46.691439  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.742285  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.241123  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:47.016817  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.517406  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:48.276244  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:50.774977  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:49.189105  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:51.190270  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.241800  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.739978  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:52.016631  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:54.515565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.516890  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.274258  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.278000  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:53.192619  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:55.693990  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:56.742737  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.241115  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.241654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.015461  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.017347  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:57.775264  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:59.775399  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:01.776382  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:26:58.190121  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:00.190792  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:02.697428  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.741654  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.742940  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:03.516565  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.516966  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:04.275212  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:06.277355  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:05.190366  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:07.190973  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.244485  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.741985  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.015202  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.016691  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:08.774384  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:10.774729  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:09.692011  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.190853  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.742313  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:15.241577  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.514881  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.516950  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.517383  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:12.774867  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.775482  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.274793  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:14.689813  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:16.692012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:17.243159  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.741811  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.517518  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.016576  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:19.275829  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.276653  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:18.692315  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:21.189564  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:22.240740  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:24.241960  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.242201  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.017348  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.515756  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.775957  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:26.275937  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:23.189646  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:25.690338  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.690947  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.741912  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.742165  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:27.516071  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.517838  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:28.276630  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:30.775134  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:29.691012  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:31.696187  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:33.241142  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:35.243536  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.017452  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.515974  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.516450  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:32.775448  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.775822  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.274968  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:34.188369  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:36.188928  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:37.741436  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.741983  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.015982  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.516526  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:39.278879  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:41.774782  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:38.189378  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:40.695851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:42.240995  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.741178  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:44.015737  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.018254  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.776276  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.276133  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:43.188678  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:45.189618  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:47.191825  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:46.741669  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.241194  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.242571  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.516687  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.016735  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:48.277486  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:50.775420  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:49.689852  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:51.691216  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.741209  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.743232  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.518209  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.016075  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.275443  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:55.774204  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:53.692276  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:56.190072  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.242009  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:00.242183  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.516449  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.016290  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:57.775327  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:59.775642  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.275827  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:27:58.691467  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:01.189998  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:02.740875  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.742481  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.523305  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.016025  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:04.275917  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:06.777604  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:03.190940  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:05.690559  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.693124  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:07.241721  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.241889  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:08.017490  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.018815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:09.274176  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.275009  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:10.190851  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.689465  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:11.741056  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.241846  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:16.243898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:12.516550  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.017547  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:13.276368  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:15.773960  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:14.690587  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.189824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:18.742657  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.243561  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.515978  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:20.016035  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:17.774474  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.776240  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.275209  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:19.194335  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:21.691142  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:23.743251  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.241450  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:22.021055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.516645  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.776861  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.274029  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:24.189740  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:26.691801  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:28.242364  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:30.740610  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:27.016851  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.017289  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.517096  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.774126  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.275287  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:29.189744  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:31.691190  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:32.741643  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:35.242108  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.015792  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.016247  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:34.773849  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.777072  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:33.692774  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:36.189115  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:37.741756  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.244685  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.016815  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.017616  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:39.276756  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:41.774190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:38.190001  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:40.690824  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.742547  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.241354  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:42.518073  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.016560  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.776627  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:46.275092  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:43.189166  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:45.692178  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.697772  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.242829  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.741555  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:47.516429  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:49.516588  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:48.775347  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:51.274069  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:50.191415  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.694362  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.242367  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.742705  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:52.019113  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:54.516748  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:53.275190  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.773511  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:55.189720  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.189811  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.241152  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.242170  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.015866  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.016464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.515901  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:57.776667  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:00.273941  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:28:59.190719  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.190988  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:01.741107  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.742524  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.243093  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.516444  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:06.017964  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:02.775583  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.280071  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:03.690586  255057 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:05.643882  255057 pod_ready.go:81] duration metric: took 4m0.000182343s waiting for pod "metrics-server-57f55c9bc5-25p7z" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:05.643921  255057 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:05.643932  255057 pod_ready.go:38] duration metric: took 4m2.754707603s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:05.643956  255057 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:29:05.643998  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:05.644060  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:05.703194  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:05.703221  255057 cri.go:89] found id: ""
	I0817 22:29:05.703229  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:05.703283  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.708602  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:05.708676  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:05.747581  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:05.747610  255057 cri.go:89] found id: ""
	I0817 22:29:05.747619  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:05.747692  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.753231  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:05.753331  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:05.795460  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:05.795489  255057 cri.go:89] found id: ""
	I0817 22:29:05.795499  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:05.795562  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.801181  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:05.801268  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:05.840433  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:05.840463  255057 cri.go:89] found id: ""
	I0817 22:29:05.840472  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:05.840546  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.845974  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:05.846039  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:05.886216  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:05.886243  255057 cri.go:89] found id: ""
	I0817 22:29:05.886252  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:05.886314  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.891204  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:05.891286  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:05.927636  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:05.927661  255057 cri.go:89] found id: ""
	I0817 22:29:05.927669  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:05.927732  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:05.932173  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:05.932230  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:05.963603  255057 cri.go:89] found id: ""
	I0817 22:29:05.963634  255057 logs.go:284] 0 containers: []
	W0817 22:29:05.963646  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:05.963654  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:05.963727  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:05.996465  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:05.996489  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:05.996496  255057 cri.go:89] found id: ""
	I0817 22:29:05.996505  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:05.996572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.001291  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:06.006314  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:06.006348  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:06.051348  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:06.051386  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:06.226315  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:06.226362  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:06.263289  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:06.263321  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:06.308223  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:06.308262  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:06.346964  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:06.347001  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:06.382834  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:06.382878  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:06.431491  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:06.431527  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:06.485901  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:06.485948  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:07.054256  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:07.054315  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:07.093229  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093417  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093570  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.093737  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.119377  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:07.119420  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:07.137712  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:07.137756  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:07.187463  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:07.187511  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:07.252728  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252775  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:07.252844  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:07.252856  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252865  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252872  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:07.252878  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:07.252884  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:07.252890  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:08.741270  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:11.245029  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:08.516388  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:10.518542  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:07.775391  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:09.775841  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:12.276748  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.741788  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:16.242264  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:13.018983  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:15.516221  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.774832  255215 pod_ready.go:102] pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:14.967926  255215 pod_ready.go:81] duration metric: took 4m0.000797383s waiting for pod "metrics-server-74d5c6b9c-h5tt6" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:14.967968  255215 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:14.967995  255215 pod_ready.go:38] duration metric: took 4m12.638851973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:14.968025  255215 kubeadm.go:640] restartCluster took 4m34.07416066s
	W0817 22:29:14.968112  255215 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:14.968150  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
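	(For reference: the 4m0s readiness wait that expires above is an internal client-go poll over the kube-system labels listed in the log line; a roughly equivalent manual check, shown here only as a sketch, would be:

	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
	  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m0s

	In this run the metrics-server pod never reports Ready, so minikube gives up on restarting the existing cluster and falls back to "kubeadm reset" followed by a fresh "kubeadm init", as the following lines show.)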
	I0817 22:29:17.254245  255057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:29:17.278452  255057 api_server.go:72] duration metric: took 4m21.775005609s to wait for apiserver process to appear ...
	I0817 22:29:17.278488  255057 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:29:17.278540  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:17.278675  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:17.317529  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:17.317554  255057 cri.go:89] found id: ""
	I0817 22:29:17.317562  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:17.317626  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.323505  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:17.323593  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:17.367258  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.367282  255057 cri.go:89] found id: ""
	I0817 22:29:17.367290  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:17.367355  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.372332  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:17.372424  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:17.406884  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:17.406914  255057 cri.go:89] found id: ""
	I0817 22:29:17.406923  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:17.406990  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.411562  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:17.411626  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:17.452516  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.452549  255057 cri.go:89] found id: ""
	I0817 22:29:17.452560  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:17.452654  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.458237  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:17.458327  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:17.498524  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:17.498550  255057 cri.go:89] found id: ""
	I0817 22:29:17.498559  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:17.498621  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.504941  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:17.505024  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:17.543542  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.543570  255057 cri.go:89] found id: ""
	I0817 22:29:17.543580  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:17.543646  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.548420  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:17.548488  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:17.589411  255057 cri.go:89] found id: ""
	I0817 22:29:17.589441  255057 logs.go:284] 0 containers: []
	W0817 22:29:17.589449  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:17.589455  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:17.589520  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:17.624044  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:17.624075  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.624083  255057 cri.go:89] found id: ""
	I0817 22:29:17.624092  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:17.624160  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.631040  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:17.635336  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:17.635359  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:17.688966  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689294  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689576  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:17.689899  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:17.729861  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:17.729923  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:17.746619  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:17.746663  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:17.805149  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:17.805198  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:17.842639  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:17.842673  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:17.905357  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:17.905406  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:17.943860  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:17.943893  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:18.242331  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:20.742262  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:17.517585  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:19.519464  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:18.114000  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:18.114038  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:18.176549  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:18.176602  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:18.211903  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:18.211947  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:18.246566  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:18.246600  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:18.280810  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:18.280853  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:18.831902  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:18.831957  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:18.883170  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883219  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:18.883304  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:18.883323  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883336  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883352  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:18.883364  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:18.883382  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:18.883391  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:23.242587  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:25.742126  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:22.017269  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:24.017806  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:26.516458  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.241489  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:30.741723  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.516703  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:31.016367  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:28.884252  255057 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8443/healthz ...
	I0817 22:29:28.889957  255057 api_server.go:279] https://192.168.61.196:8443/healthz returned 200:
	ok
	I0817 22:29:28.891532  255057 api_server.go:141] control plane version: v1.28.0-rc.1
	I0817 22:29:28.891560  255057 api_server.go:131] duration metric: took 11.613062869s to wait for apiserver health ...
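	(The healthz probe above queries the apiserver endpoint directly; assuming the cluster's default anonymous access to /healthz is in place, the same check can be reproduced from the host with a plain HTTPS request, for example:

	  curl -k https://192.168.61.196:8443/healthz
	  # expected output: ok

	The address and port are taken from the log line above; -k skips TLS verification because the cluster CA is not in the host trust store.)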
	I0817 22:29:28.891571  255057 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:29:28.891602  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0817 22:29:28.891669  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 22:29:28.927462  255057 cri.go:89] found id: "c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:28.927496  255057 cri.go:89] found id: ""
	I0817 22:29:28.927506  255057 logs.go:284] 1 containers: [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5]
	I0817 22:29:28.927572  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.932195  255057 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0817 22:29:28.932284  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 22:29:28.974041  255057 cri.go:89] found id: "07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:28.974092  255057 cri.go:89] found id: ""
	I0817 22:29:28.974103  255057 logs.go:284] 1 containers: [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992]
	I0817 22:29:28.974172  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:28.978230  255057 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0817 22:29:28.978302  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 22:29:29.012431  255057 cri.go:89] found id: "4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.012459  255057 cri.go:89] found id: ""
	I0817 22:29:29.012469  255057 logs.go:284] 1 containers: [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18]
	I0817 22:29:29.012539  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.017232  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0817 22:29:29.017311  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 22:29:29.051208  255057 cri.go:89] found id: "291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.051235  255057 cri.go:89] found id: ""
	I0817 22:29:29.051242  255057 logs.go:284] 1 containers: [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17]
	I0817 22:29:29.051292  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.056125  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0817 22:29:29.056193  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 22:29:29.094165  255057 cri.go:89] found id: "d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.094196  255057 cri.go:89] found id: ""
	I0817 22:29:29.094207  255057 logs.go:284] 1 containers: [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405]
	I0817 22:29:29.094277  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.098992  255057 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 22:29:29.099054  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 22:29:29.138522  255057 cri.go:89] found id: "8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.138552  255057 cri.go:89] found id: ""
	I0817 22:29:29.138561  255057 logs.go:284] 1 containers: [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb]
	I0817 22:29:29.138614  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.143075  255057 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0817 22:29:29.143159  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0817 22:29:29.177797  255057 cri.go:89] found id: ""
	I0817 22:29:29.177831  255057 logs.go:284] 0 containers: []
	W0817 22:29:29.177842  255057 logs.go:286] No container was found matching "kindnet"
	I0817 22:29:29.177850  255057 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0817 22:29:29.177916  255057 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 22:29:29.208897  255057 cri.go:89] found id: "5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.208922  255057 cri.go:89] found id: "659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.208928  255057 cri.go:89] found id: ""
	I0817 22:29:29.208937  255057 logs.go:284] 2 containers: [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d]
	I0817 22:29:29.209008  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.213083  255057 ssh_runner.go:195] Run: which crictl
	I0817 22:29:29.217020  255057 logs.go:123] Gathering logs for kubelet ...
	I0817 22:29:29.217043  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 22:29:29.253559  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253779  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.253989  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:29.254225  255057 logs.go:138] Found kubelet problem: Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:29.280705  255057 logs.go:123] Gathering logs for dmesg ...
	I0817 22:29:29.280746  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 22:29:29.295400  255057 logs.go:123] Gathering logs for etcd [07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992] ...
	I0817 22:29:29.295432  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07f7152c064dca3eb03f9f60cc10edd679bb105247d68885aa71789fbc4f2992"
	I0817 22:29:29.344222  255057 logs.go:123] Gathering logs for describe nodes ...
	I0817 22:29:29.344268  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 22:29:29.482768  255057 logs.go:123] Gathering logs for kube-apiserver [c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5] ...
	I0817 22:29:29.482812  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3d45374a533da4120d7453a65f7343f63ea73a46e7bf368d22f51728527b8a5"
	I0817 22:29:29.541274  255057 logs.go:123] Gathering logs for coredns [4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18] ...
	I0817 22:29:29.541317  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2d6d0a0e6718257378ddb6183f7c5092be9d79a467f062114168f219612a18"
	I0817 22:29:29.577842  255057 logs.go:123] Gathering logs for kube-scheduler [291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17] ...
	I0817 22:29:29.577876  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 291d84856ee9aea69a019c87e5888771fed84e69119de49bc633419dce948c17"
	I0817 22:29:29.613556  255057 logs.go:123] Gathering logs for kube-proxy [d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405] ...
	I0817 22:29:29.613595  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5071416ecfc1b7a46d665db579cf9809d48f54d08bd87282977bf71438b9405"
	I0817 22:29:29.654840  255057 logs.go:123] Gathering logs for kube-controller-manager [8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb] ...
	I0817 22:29:29.654886  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecbcee30abd9bf98846f8f484d44538a4c4d64fbfe30f8e6b784b0d54d545cb"
	I0817 22:29:29.711929  255057 logs.go:123] Gathering logs for storage-provisioner [5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e] ...
	I0817 22:29:29.711974  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e92f33147487c3def37b5c0bbea3559efbbd3e7bb0b4df6bc90f85d65e38f6e"
	I0817 22:29:29.749746  255057 logs.go:123] Gathering logs for storage-provisioner [659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d] ...
	I0817 22:29:29.749802  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 659e02540293f74e956357932604b45e9cf015aa7dc1fb486efa91c7909e408d"
	I0817 22:29:29.782899  255057 logs.go:123] Gathering logs for CRI-O ...
	I0817 22:29:29.782932  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0817 22:29:30.286425  255057 logs.go:123] Gathering logs for container status ...
	I0817 22:29:30.286488  255057 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 22:29:30.328588  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328616  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0817 22:29:30.328686  255057 out.go:239] X Problems detected in kubelet:
	W0817 22:29:30.328701  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259111    1238 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328715  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259175    1238 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328729  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: W0817 22:24:52.259224    1238 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	W0817 22:29:30.328745  255057 out.go:239]   Aug 17 22:24:52 no-preload-525875 kubelet[1238]: E0817 22:24:52.259238    1238 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-525875" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-525875' and this object
	I0817 22:29:30.328754  255057 out.go:309] Setting ErrFile to fd 2...
	I0817 22:29:30.328762  255057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:29:32.741952  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.241640  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:33.516723  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:35.516887  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.339646  255057 system_pods.go:59] 8 kube-system pods found
	I0817 22:29:40.339676  255057 system_pods.go:61] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.339681  255057 system_pods.go:61] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.339685  255057 system_pods.go:61] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.339690  255057 system_pods.go:61] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.339694  255057 system_pods.go:61] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.339698  255057 system_pods.go:61] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.339705  255057 system_pods.go:61] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.339711  255057 system_pods.go:61] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.339722  255057 system_pods.go:74] duration metric: took 11.448139171s to wait for pod list to return data ...
	I0817 22:29:40.339730  255057 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:29:40.344246  255057 default_sa.go:45] found service account: "default"
	I0817 22:29:40.344271  255057 default_sa.go:55] duration metric: took 4.534553ms for default service account to be created ...
	I0817 22:29:40.344280  255057 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:29:40.353485  255057 system_pods.go:86] 8 kube-system pods found
	I0817 22:29:40.353521  255057 system_pods.go:89] "coredns-5dd5756b68-b54g4" [3fad219a-90a1-4ec1-b6fe-12632c5f1913] Running
	I0817 22:29:40.353529  255057 system_pods.go:89] "etcd-no-preload-525875" [ecfedded-4607-4006-b464-9504efc6eb53] Running
	I0817 22:29:40.353537  255057 system_pods.go:89] "kube-apiserver-no-preload-525875" [0649b7e1-b20f-4de7-84f2-718b8cb38f2a] Running
	I0817 22:29:40.353546  255057 system_pods.go:89] "kube-controller-manager-no-preload-525875" [312d5bdd-b78f-4a53-bb24-addec8398929] Running
	I0817 22:29:40.353553  255057 system_pods.go:89] "kube-proxy-pzpk2" [4373b29e-6b11-4c28-bbb4-3d97d2151565] Running
	I0817 22:29:40.353560  255057 system_pods.go:89] "kube-scheduler-no-preload-525875" [909d1c39-9768-49f2-bcb4-ef1b146dd999] Running
	I0817 22:29:40.353579  255057 system_pods.go:89] "metrics-server-57f55c9bc5-25p7z" [1069cee0-4d6e-4420-a3e5-c3ca300db03f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:29:40.353589  255057 system_pods.go:89] "storage-provisioner" [f18e7ab1-0b36-4439-9282-fbc4bf804abc] Running
	I0817 22:29:40.353598  255057 system_pods.go:126] duration metric: took 9.313259ms to wait for k8s-apps to be running ...
	I0817 22:29:40.353612  255057 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:29:40.353685  255057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:40.376714  255057 system_svc.go:56] duration metric: took 23.088082ms WaitForService to wait for kubelet.
	I0817 22:29:40.376759  255057 kubeadm.go:581] duration metric: took 4m44.873323742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:29:40.377191  255057 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:29:40.385016  255057 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:29:40.385043  255057 node_conditions.go:123] node cpu capacity is 2
	I0817 22:29:40.385055  255057 node_conditions.go:105] duration metric: took 7.857619ms to run NodePressure ...
	I0817 22:29:40.385068  255057 start.go:228] waiting for startup goroutines ...
	I0817 22:29:40.385074  255057 start.go:233] waiting for cluster config update ...
	I0817 22:29:40.385085  255057 start.go:242] writing updated cluster config ...
	I0817 22:29:40.385411  255057 ssh_runner.go:195] Run: rm -f paused
	I0817 22:29:40.457420  255057 start.go:600] kubectl: 1.28.0, cluster: 1.28.0-rc.1 (minor skew: 0)
	I0817 22:29:40.460043  255057 out.go:177] * Done! kubectl is now configured to use "no-preload-525875" cluster and "default" namespace by default
	I0817 22:29:37.242898  255491 pod_ready.go:102] pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:37.462917  255491 pod_ready.go:81] duration metric: took 4m0.00026087s waiting for pod "metrics-server-74d5c6b9c-25l6w" in "kube-system" namespace to be "Ready" ...
	E0817 22:29:37.462956  255491 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:29:37.463009  255491 pod_ready.go:38] duration metric: took 4m10.583985022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:29:37.463050  255491 kubeadm.go:640] restartCluster took 4m32.042723788s
	W0817 22:29:37.463141  255491 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:29:37.463185  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:29:37.517852  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:40.016790  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:42.517001  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:45.016757  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:47.291163  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.322979002s)
	I0817 22:29:47.291246  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:29:47.305948  255215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:29:47.316036  255215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:29:47.325470  255215 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:29:47.325519  255215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
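	(This Start line, together with the reset begun at 22:29:14 and completed above at 22:29:47, is minikube's fallback path when a cluster restart times out: wipe the old control plane, then re-init from the generated kubeadm.yaml. The equivalent manual sequence, using the binary path and version shown in this log (the kubeadm.yaml itself is generated by minikube and not reproduced here), is roughly:

	  sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	  sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...

	The --ignore-preflight-errors list matches the full set in the Start line above.)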
	I0817 22:29:47.566297  255215 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:29:47.017112  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:49.017246  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:51.018095  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:53.519020  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:56.016627  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.087786  255215 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:29:59.087860  255215 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:29:59.087991  255215 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:29:59.088169  255215 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:29:59.088306  255215 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:29:59.088388  255215 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:29:59.090358  255215 out.go:204]   - Generating certificates and keys ...
	I0817 22:29:59.090460  255215 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:29:59.090547  255215 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:29:59.090660  255215 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:29:59.090766  255215 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:29:59.090886  255215 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:29:59.090976  255215 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:29:59.091060  255215 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:29:59.091152  255215 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:29:59.091250  255215 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:29:59.091350  255215 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:29:59.091435  255215 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:29:59.091514  255215 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:29:59.091589  255215 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:29:59.091655  255215 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:29:59.091759  255215 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:29:59.091836  255215 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:29:59.091960  255215 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:29:59.092070  255215 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:29:59.092127  255215 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:29:59.092207  255215 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:29:59.094268  255215 out.go:204]   - Booting up control plane ...
	I0817 22:29:59.094408  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:29:59.094513  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:29:59.094594  255215 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:29:59.094719  255215 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:29:59.094944  255215 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:29:59.095031  255215 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504676 seconds
	I0817 22:29:59.095206  255215 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:29:59.095401  255215 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:29:59.095494  255215 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:29:59.095757  255215 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-437183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:29:59.095844  255215 kubeadm.go:322] [bootstrap-token] Using token: 0fftkt.nm31ryo8p4990tdr
	I0817 22:29:59.097581  255215 out.go:204]   - Configuring RBAC rules ...
	I0817 22:29:59.097750  255215 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:29:59.097884  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:29:59.098097  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:29:59.098258  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:29:59.098405  255215 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:29:59.098510  255215 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:29:59.098679  255215 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:29:59.098745  255215 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:29:59.098802  255215 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:29:59.098811  255215 kubeadm.go:322] 
	I0817 22:29:59.098889  255215 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:29:59.098898  255215 kubeadm.go:322] 
	I0817 22:29:59.099010  255215 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:29:59.099033  255215 kubeadm.go:322] 
	I0817 22:29:59.099069  255215 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:29:59.099142  255215 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:29:59.099221  255215 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:29:59.099232  255215 kubeadm.go:322] 
	I0817 22:29:59.099297  255215 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:29:59.099307  255215 kubeadm.go:322] 
	I0817 22:29:59.099365  255215 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:29:59.099374  255215 kubeadm.go:322] 
	I0817 22:29:59.099446  255215 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:29:59.099552  255215 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:29:59.099660  255215 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:29:59.099670  255215 kubeadm.go:322] 
	I0817 22:29:59.099799  255215 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:29:59.099909  255215 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:29:59.099917  255215 kubeadm.go:322] 
	I0817 22:29:59.100037  255215 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100173  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:29:59.100205  255215 kubeadm.go:322] 	--control-plane 
	I0817 22:29:59.100218  255215 kubeadm.go:322] 
	I0817 22:29:59.100348  255215 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:29:59.100359  255215 kubeadm.go:322] 
	I0817 22:29:59.100472  255215 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0fftkt.nm31ryo8p4990tdr \
	I0817 22:29:59.100610  255215 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:29:59.100639  255215 cni.go:84] Creating CNI manager for ""
	I0817 22:29:59.100650  255215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:29:59.102534  255215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:29:58.017949  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:00.519619  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:29:59.104107  255215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:29:59.128756  255215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
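	(The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log; purely as an illustration, a bridge-plus-portmap conflist of the kind the bridge CNI consumes looks roughly like the following, with the subnet standing in for the cluster's pod CIDR:

	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }

	Field names follow the upstream CNI bridge and portmap plugin documentation; the exact file minikube writes may differ.)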
	I0817 22:29:59.172002  255215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.172077  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=embed-certs-437183 minikube.k8s.io/updated_at=2023_08_17T22_29_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.717974  255215 ops.go:34] apiserver oom_adj: -16
	I0817 22:29:59.718154  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:29:59.815994  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.419198  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:00.919196  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.419096  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:01.919517  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:02.419076  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.017120  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:05.017919  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:02.919289  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.419268  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:03.919021  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.418663  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:04.919015  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.419325  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:05.919309  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.418701  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:06.919301  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.418670  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:07.919445  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.419363  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:08.918988  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.418788  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:09.918948  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.418731  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:10.919293  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.419374  255215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:11.578800  255215 kubeadm.go:1081] duration metric: took 12.40679081s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:11.578850  255215 kubeadm.go:406] StartCluster complete in 5m30.729798213s
	I0817 22:30:11.578877  255215 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.578990  255215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:11.581741  255215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:11.582107  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:11.582305  255215 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:11.582414  255215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-437183"
	I0817 22:30:11.582435  255215 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-437183"
	I0817 22:30:11.582433  255215 config.go:182] Loaded profile config "embed-certs-437183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:11.582436  255215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-437183"
	I0817 22:30:11.582449  255215 addons.go:69] Setting metrics-server=true in profile "embed-certs-437183"
	I0817 22:30:11.582461  255215 addons.go:231] Setting addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:11.582465  255215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-437183"
	W0817 22:30:11.582467  255215 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:11.582521  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	W0817 22:30:11.582443  255215 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:11.582609  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.582956  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582976  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.582992  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583000  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.583326  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.583361  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.600606  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0817 22:30:11.601162  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.601890  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.601918  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.602386  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.603044  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.603110  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.603922  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0817 22:30:11.604193  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0817 22:30:11.604476  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.604711  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.605320  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605342  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605474  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.605500  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.605874  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.605927  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.606184  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.606616  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.606654  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.622026  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0817 22:30:11.622822  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.623522  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.623556  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.624021  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.624332  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.626478  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.629171  255215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:11.627845  255215 addons.go:231] Setting addon default-storageclass=true in "embed-certs-437183"
	W0817 22:30:11.629212  255215 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:11.629267  255215 host.go:66] Checking if "embed-certs-437183" exists ...
	I0817 22:30:11.628437  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0817 22:30:11.629683  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.631294  255215 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.631295  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.629905  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.631315  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:11.631339  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.632333  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.632356  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.632860  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.633085  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.635520  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.635727  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.638116  255215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:09.776936  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.313725935s)
	I0817 22:30:09.777008  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:09.794808  255491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:09.806086  255491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:09.818495  255491 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:09.818547  255491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0817 22:30:10.061316  255491 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:30:11.636353  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.636644  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.640483  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.640486  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:11.640508  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:11.640535  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.640703  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.640905  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.641073  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.645685  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646351  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.646376  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.646867  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.647096  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.647286  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.647444  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.655819  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0817 22:30:11.656540  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.657308  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.657326  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.657864  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.658485  255215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:11.658520  255215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:11.679610  255215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0817 22:30:11.680268  255215 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:11.680977  255215 main.go:141] libmachine: Using API Version  1
	I0817 22:30:11.681013  255215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:11.681485  255215 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:11.681722  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetState
	I0817 22:30:11.683711  255215 main.go:141] libmachine: (embed-certs-437183) Calling .DriverName
	I0817 22:30:11.686274  255215 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.686297  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:11.686323  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHHostname
	I0817 22:30:11.692154  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHPort
	I0817 22:30:11.692160  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692245  255215 main.go:141] libmachine: (embed-certs-437183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c0:2b", ip: ""} in network mk-embed-certs-437183: {Iface:virbr3 ExpiryTime:2023-08-17 23:24:26 +0000 UTC Type:0 Mac:52:54:00:c7:c0:2b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:embed-certs-437183 Clientid:01:52:54:00:c7:c0:2b}
	I0817 22:30:11.692288  255215 main.go:141] libmachine: (embed-certs-437183) DBG | domain embed-certs-437183 has defined IP address 192.168.39.186 and MAC address 52:54:00:c7:c0:2b in network mk-embed-certs-437183
	I0817 22:30:11.692447  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHKeyPath
	I0817 22:30:11.692691  255215 main.go:141] libmachine: (embed-certs-437183) Calling .GetSSHUsername
	I0817 22:30:11.692899  255215 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/embed-certs-437183/id_rsa Username:docker}
	I0817 22:30:11.742259  255215 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-437183" context rescaled to 1 replicas
	I0817 22:30:11.742317  255215 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:11.744647  255215 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:07.516999  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:10.016647  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:11.746674  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:11.833127  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:11.853282  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:11.853316  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:11.858219  255215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.858353  255215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:11.889330  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:11.896554  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:11.896595  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:11.906260  255215 node_ready.go:49] node "embed-certs-437183" has status "Ready":"True"
	I0817 22:30:11.906292  255215 node_ready.go:38] duration metric: took 48.027482ms waiting for node "embed-certs-437183" to be "Ready" ...
	I0817 22:30:11.906305  255215 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:11.949379  255215 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:11.949409  255215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:12.023543  255215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:12.131426  255215 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:14.420517  255215 pod_ready.go:102] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.647805  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.814629092s)
	I0817 22:30:14.647842  255215 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78945104s)
	I0817 22:30:14.647874  255215 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:14.647904  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.758517925s)
	I0817 22:30:14.647915  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648017  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648042  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648067  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648478  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.648532  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.648626  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.648638  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.648656  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.648882  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.649025  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.649050  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.649069  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.650529  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.650577  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.650586  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.650600  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:14.650614  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:14.651171  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.651230  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:14.652509  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652529  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:14.652688  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:14.652708  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.175766  255215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.152137099s)
	I0817 22:30:15.175888  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.175915  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176344  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.176343  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.176428  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.176452  255215 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:15.176488  255215 main.go:141] libmachine: (embed-certs-437183) Calling .Close
	I0817 22:30:15.176915  255215 main.go:141] libmachine: (embed-certs-437183) DBG | Closing plugin on server side
	I0817 22:30:15.178804  255215 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:15.178827  255215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:15.178840  255215 addons.go:467] Verifying addon metrics-server=true in "embed-certs-437183"
	I0817 22:30:15.180928  255215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0817 22:30:12.018605  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:14.519226  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:15.182515  255215 addons.go:502] enable addons completed in 3.600222172s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0817 22:30:16.920634  255215 pod_ready.go:92] pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.920664  255215 pod_ready.go:81] duration metric: took 4.789200515s waiting for pod "coredns-5d78c9869d-ghvnx" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.920674  255215 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937440  255215 pod_ready.go:92] pod "etcd-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.937469  255215 pod_ready.go:81] duration metric: took 16.789093ms waiting for pod "etcd-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.937483  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944411  255215 pod_ready.go:92] pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.944437  255215 pod_ready.go:81] duration metric: took 6.944986ms waiting for pod "kube-apiserver-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.944451  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952239  255215 pod_ready.go:92] pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:16.952267  255215 pod_ready.go:81] duration metric: took 7.807798ms waiting for pod "kube-controller-manager-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:16.952281  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815597  255215 pod_ready.go:92] pod "kube-proxy-2f6jz" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:17.815630  255215 pod_ready.go:81] duration metric: took 863.340907ms waiting for pod "kube-proxy-2f6jz" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:17.815644  255215 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108648  255215 pod_ready.go:92] pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:18.108683  255215 pod_ready.go:81] duration metric: took 293.029473ms waiting for pod "kube-scheduler-embed-certs-437183" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:18.108693  255215 pod_ready.go:38] duration metric: took 6.202373203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
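(Editor's note on the pod_ready.go waits recorded above: each system-critical pod is polled until its Ready condition reports True. A minimal Go sketch of an equivalent check is shown below; it is not minikube's implementation — the kubectl shell-out, the 2-second poll interval, and the example pod name are assumptions made for illustration.)

// podready.go: hedged sketch of polling a pod's Ready condition via kubectl,
// analogous to the pod_ready.go waits in the log above. Not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(namespace, pod string, timeout time.Duration) error {
	// JSONPath that extracts the status of the pod's Ready condition.
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil // pod reports Ready=True, matching the `"Ready":"True"` lines above
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	// Example pod name taken from the log; any pod/namespace works.
	if err := waitPodReady("kube-system", "coredns-5d78c9869d-ghvnx", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}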
	I0817 22:30:18.108726  255215 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:18.108789  255215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:18.129379  255215 api_server.go:72] duration metric: took 6.38701969s to wait for apiserver process to appear ...
	I0817 22:30:18.129409  255215 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:18.129425  255215 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0817 22:30:18.138226  255215 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0817 22:30:18.141542  255215 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:18.141568  255215 api_server.go:131] duration metric: took 12.152138ms to wait for apiserver health ...
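(Editor's note on the api_server.go lines above: startup is gated on the apiserver's /healthz endpoint returning 200 "ok". The following Go sketch shows such a gate under stated assumptions — the endpoint URL is copied from the log, TLS verification is skipped for brevity, and the timeout is invented; this is not minikube's implementation, which uses the cluster CA.)

// healthzcheck.go: hedged sketch of waiting for an apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert in this setup; the sketch skips
		// verification, whereas real code would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthz returned 200 "ok", as in the log lines above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.186:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}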
	I0817 22:30:18.141579  255215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:18.312736  255215 system_pods.go:59] 8 kube-system pods found
	I0817 22:30:18.312782  255215 system_pods.go:61] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.312790  255215 system_pods.go:61] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.312798  255215 system_pods.go:61] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.312804  255215 system_pods.go:61] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.312811  255215 system_pods.go:61] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.312817  255215 system_pods.go:61] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.312831  255215 system_pods.go:61] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.312841  255215 system_pods.go:61] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.312855  255215 system_pods.go:74] duration metric: took 171.269837ms to wait for pod list to return data ...
	I0817 22:30:18.312868  255215 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:18.511271  255215 default_sa.go:45] found service account: "default"
	I0817 22:30:18.511380  255215 default_sa.go:55] duration metric: took 198.492073ms for default service account to be created ...
	I0817 22:30:18.511401  255215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:18.710880  255215 system_pods.go:86] 8 kube-system pods found
	I0817 22:30:18.710911  255215 system_pods.go:89] "coredns-5d78c9869d-ghvnx" [64d11a2e-bf5b-4e8c-a22f-daf40cb94c1b] Running
	I0817 22:30:18.710917  255215 system_pods.go:89] "etcd-embed-certs-437183" [f3297b73-c7a0-4c13-bfb1-cb185626631d] Running
	I0817 22:30:18.710921  255215 system_pods.go:89] "kube-apiserver-embed-certs-437183" [73e0b67a-d61d-4567-bdbb-9a91867b1805] Running
	I0817 22:30:18.710926  255215 system_pods.go:89] "kube-controller-manager-embed-certs-437183" [97453db3-50a0-4434-8449-fec9cb705644] Running
	I0817 22:30:18.710929  255215 system_pods.go:89] "kube-proxy-2f6jz" [c82a9796-e23b-4823-a3f2-d180b9aa866f] Running
	I0817 22:30:18.710933  255215 system_pods.go:89] "kube-scheduler-embed-certs-437183" [71f0441e-641d-487c-9d27-032ee8d0586f] Running
	I0817 22:30:18.710943  255215 system_pods.go:89] "metrics-server-74d5c6b9c-9zstm" [a881915b-d7e9-431f-8666-d225a4720a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:18.710949  255215 system_pods.go:89] "storage-provisioner" [43cb4a9a-10c6-43f7-8d58-7348e2510947] Running
	I0817 22:30:18.710958  255215 system_pods.go:126] duration metric: took 199.549571ms to wait for k8s-apps to be running ...
	I0817 22:30:18.710967  255215 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:18.711013  255215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:18.725788  255215 system_svc.go:56] duration metric: took 14.807351ms WaitForService to wait for kubelet.
	I0817 22:30:18.725819  255215 kubeadm.go:581] duration metric: took 6.983465617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:18.725846  255215 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:18.908038  255215 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:18.908079  255215 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:18.908093  255215 node_conditions.go:105] duration metric: took 182.240177ms to run NodePressure ...
	I0817 22:30:18.908108  255215 start.go:228] waiting for startup goroutines ...
	I0817 22:30:18.908127  255215 start.go:233] waiting for cluster config update ...
	I0817 22:30:18.908142  255215 start.go:242] writing updated cluster config ...
	I0817 22:30:18.908536  255215 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:18.962718  255215 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:18.965052  255215 out.go:177] * Done! kubectl is now configured to use "embed-certs-437183" cluster and "default" namespace by default
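(Editor's note on the closing line above: the run reports a client/cluster minor-version skew of 1 — kubectl 1.28.0 against Kubernetes 1.27.4 — which is within kubectl's supported one-minor-version skew. A toy Go sketch of that comparison follows; the parsing is simplified and is not minikube's implementation.)

// skewcheck.go: hedged sketch of the minor-version skew note printed above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "MAJOR.MINOR.PATCH" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.28.0", "1.27.4" // versions taken from the log line above
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}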
	I0817 22:30:17.018314  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:19.517055  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:21.517216  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:22.302082  255491 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 22:30:22.302198  255491 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:22.302316  255491 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:22.302392  255491 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:22.302537  255491 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:22.302623  255491 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:22.304947  255491 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:22.305043  255491 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:22.305112  255491 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:22.305227  255491 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:22.305295  255491 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:22.305389  255491 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:22.305466  255491 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:22.305540  255491 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:22.305614  255491 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:22.305703  255491 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:22.305801  255491 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:22.305861  255491 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:22.305956  255491 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:22.306043  255491 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:22.306141  255491 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:22.306231  255491 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:22.306313  255491 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:22.306462  255491 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:22.306597  255491 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:22.306674  255491 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 22:30:22.306778  255491 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:22.308372  255491 out.go:204]   - Booting up control plane ...
	I0817 22:30:22.308478  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:22.308565  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:22.308644  255491 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:22.308735  255491 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:22.308942  255491 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:22.309046  255491 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003655 seconds
	I0817 22:30:22.309195  255491 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:22.309352  255491 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:22.309430  255491 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:22.309656  255491 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-321287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 22:30:22.309729  255491 kubeadm.go:322] [bootstrap-token] Using token: vtugjh.yrdml71jezyixk01
	I0817 22:30:22.311499  255491 out.go:204]   - Configuring RBAC rules ...
	I0817 22:30:22.311610  255491 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:30:22.311706  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 22:30:22.311887  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:30:22.312069  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:30:22.312240  255491 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:30:22.312338  255491 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:30:22.312462  255491 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 22:30:22.312516  255491 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:30:22.312583  255491 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:30:22.312595  255491 kubeadm.go:322] 
	I0817 22:30:22.312680  255491 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:30:22.312693  255491 kubeadm.go:322] 
	I0817 22:30:22.312798  255491 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:30:22.312806  255491 kubeadm.go:322] 
	I0817 22:30:22.312847  255491 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:30:22.312926  255491 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:30:22.313008  255491 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:30:22.313016  255491 kubeadm.go:322] 
	I0817 22:30:22.313073  255491 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 22:30:22.313092  255491 kubeadm.go:322] 
	I0817 22:30:22.313135  255491 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 22:30:22.313141  255491 kubeadm.go:322] 
	I0817 22:30:22.313180  255491 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:30:22.313271  255491 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:30:22.313397  255491 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:30:22.313421  255491 kubeadm.go:322] 
	I0817 22:30:22.313561  255491 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 22:30:22.313670  255491 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:30:22.313691  255491 kubeadm.go:322] 
	I0817 22:30:22.313790  255491 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.313910  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:30:22.313930  255491 kubeadm.go:322] 	--control-plane 
	I0817 22:30:22.313933  255491 kubeadm.go:322] 
	I0817 22:30:22.314017  255491 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:30:22.314029  255491 kubeadm.go:322] 
	I0817 22:30:22.314161  255491 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token vtugjh.yrdml71jezyixk01 \
	I0817 22:30:22.314324  255491 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
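(Editor's note on the join commands above: the --discovery-token-ca-cert-hash value is a SHA-256 pin over the cluster CA certificate's Subject Public Key Info, per kubeadm's documented RFC 7469-style format. The Go sketch below derives such a pin; the CA file path matches the certificateDir logged above but is still an assumption, and this is not kubeadm's or minikube's code.)

// cacerthash.go: hedged sketch of computing a kubeadm discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed CA location, based on the certificateDir printed earlier in this run.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info, then prefix with "sha256:".
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}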
	I0817 22:30:22.314342  255491 cni.go:84] Creating CNI manager for ""
	I0817 22:30:22.314352  255491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:30:22.316092  255491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:30:22.317823  255491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:30:22.330216  255491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:30:22.364427  255491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:30:22.364530  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.364541  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=default-k8s-diff-port-321287 minikube.k8s.io/updated_at=2023_08_17T22_30_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.398800  255491 ops.go:34] apiserver oom_adj: -16
	I0817 22:30:22.789239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:22.908906  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.507279  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.007071  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:24.507204  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.007980  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:25.507764  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.007834  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:26.507449  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:23.518185  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:26.017066  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:27.007162  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:27.507978  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.008024  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.507376  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.007583  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:29.507355  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.007416  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:30.507014  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.007539  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:31.507116  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:28.516778  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:31.016979  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:32.007363  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:32.508019  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.007624  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:33.507337  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.007239  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:34.507255  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.007804  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.507323  255491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:30:35.647403  255491 kubeadm.go:1081] duration metric: took 13.282950211s to wait for elevateKubeSystemPrivileges.
	I0817 22:30:35.647439  255491 kubeadm.go:406] StartCluster complete in 5m30.275148595s
	I0817 22:30:35.647465  255491 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.647562  255491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:30:35.649294  255491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:30:35.649625  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:30:35.649672  255491 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:30:35.649793  255491 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649815  255491 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.649827  255491 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:30:35.649857  255491 config.go:182] Loaded profile config "default-k8s-diff-port-321287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:30:35.649897  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.649914  255491 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.649931  255491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-321287"
	I0817 22:30:35.650130  255491 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-321287"
	I0817 22:30:35.650154  255491 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.650163  255491 addons.go:240] addon metrics-server should already be in state true
	I0817 22:30:35.650207  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.650360  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650362  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650397  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650456  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.650616  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.650660  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.666863  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0817 22:30:35.666883  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0817 22:30:35.667444  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.667657  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.668085  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668105  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668245  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.668256  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.668780  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.669523  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.669553  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.670006  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0817 22:30:35.670382  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.670448  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.670513  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.670985  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.671005  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.671824  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.672870  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.672905  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.682146  255491 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-321287"
	W0817 22:30:35.682167  255491 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:30:35.682200  255491 host.go:66] Checking if "default-k8s-diff-port-321287" exists ...
	I0817 22:30:35.682640  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.682674  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.690436  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0817 22:30:35.691039  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.691642  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.691666  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.692056  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.692328  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.692416  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0817 22:30:35.693048  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.693566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.693588  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.693974  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.694180  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.694314  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.696623  255491 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:30:35.696015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.698535  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:30:35.698555  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:30:35.698593  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.700284  255491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:30:35.702071  255491 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.702097  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:30:35.702127  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.703050  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.703111  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703139  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.703161  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.703297  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.703498  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.703605  255491 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-321287" context rescaled to 1 replicas
	I0817 22:30:35.703641  255491 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:30:35.706989  255491 out.go:177] * Verifying Kubernetes components...
	I0817 22:30:35.703707  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.707227  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.707832  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40363
	I0817 22:30:35.708116  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.709223  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:35.709358  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.709408  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.709426  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.709650  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.709767  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.709979  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.710587  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.710608  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.711008  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.711578  255491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:30:35.711631  255491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:30:35.730317  255491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35051
	I0817 22:30:35.730875  255491 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:30:35.731566  255491 main.go:141] libmachine: Using API Version  1
	I0817 22:30:35.731595  255491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:30:35.731993  255491 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:30:35.732228  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetState
	I0817 22:30:35.734475  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .DriverName
	I0817 22:30:35.734778  255491 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.734799  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:30:35.734822  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHHostname
	I0817 22:30:35.737878  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738337  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:e5:b8", ip: ""} in network mk-default-k8s-diff-port-321287: {Iface:virbr1 ExpiryTime:2023-08-17 23:24:46 +0000 UTC Type:0 Mac:52:54:00:24:e5:b8 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-321287 Clientid:01:52:54:00:24:e5:b8}
	I0817 22:30:35.738359  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | domain default-k8s-diff-port-321287 has defined IP address 192.168.50.30 and MAC address 52:54:00:24:e5:b8 in network mk-default-k8s-diff-port-321287
	I0817 22:30:35.738478  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHPort
	I0817 22:30:35.739396  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHKeyPath
	I0817 22:30:35.739599  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .GetSSHUsername
	I0817 22:30:35.739850  255491 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/default-k8s-diff-port-321287/id_rsa Username:docker}
	I0817 22:30:35.902960  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:30:35.913205  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:30:35.936947  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:30:35.936977  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:30:35.977717  255491 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.977876  255491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:30:35.984231  255491 node_ready.go:49] node "default-k8s-diff-port-321287" has status "Ready":"True"
	I0817 22:30:35.984286  255491 node_ready.go:38] duration metric: took 6.524258ms waiting for node "default-k8s-diff-port-321287" to be "Ready" ...
	I0817 22:30:35.984302  255491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:36.008884  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:30:36.008915  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:30:36.010024  255491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.073572  255491 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.073607  255491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:30:36.139665  255491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:30:36.382827  255491 pod_ready.go:92] pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.382863  255491 pod_ready.go:81] duration metric: took 372.809939ms waiting for pod "etcd-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.382878  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513607  255491 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.513640  255491 pod_ready.go:81] duration metric: took 130.752675ms waiting for pod "kube-apiserver-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.513653  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610942  255491 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:36.610974  255491 pod_ready.go:81] duration metric: took 97.312774ms waiting for pod "kube-controller-manager-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:36.610989  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:33.017198  254975 pod_ready.go:102] pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:34.709633  254975 pod_ready.go:81] duration metric: took 4m0.001081095s waiting for pod "metrics-server-74d5856cc6-xv69h" in "kube-system" namespace to be "Ready" ...
	E0817 22:30:34.709679  254975 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0817 22:30:34.709709  254975 pod_ready.go:38] duration metric: took 4m1.187941338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:34.709762  254975 kubeadm.go:640] restartCluster took 5m3.210628062s
	W0817 22:30:34.709854  254975 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 22:30:34.709895  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0817 22:30:38.629738  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.716488882s)
	I0817 22:30:38.629799  255491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651889874s)
	I0817 22:30:38.629829  255491 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0817 22:30:38.629802  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629871  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.629753  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.726738359s)
	I0817 22:30:38.629944  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.629971  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630368  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630389  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630401  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630429  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630528  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630559  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630578  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.630587  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.630677  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.630707  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630732  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.630973  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.630991  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.631004  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:38.631007  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.631015  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:38.632993  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:38.633019  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:38.633033  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:38.758987  255491 pod_ready.go:102] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"False"
	I0817 22:30:39.084274  255491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.944554423s)
	I0817 22:30:39.084336  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084352  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.084785  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.084799  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) DBG | Closing plugin on server side
	I0817 22:30:39.084817  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.084829  255491 main.go:141] libmachine: Making call to close driver server
	I0817 22:30:39.084842  255491 main.go:141] libmachine: (default-k8s-diff-port-321287) Calling .Close
	I0817 22:30:39.085152  255491 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:30:39.085168  255491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:30:39.085179  255491 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-321287"
	I0817 22:30:39.087296  255491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:30:39.089202  255491 addons.go:502] enable addons completed in 3.439530445s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:30:41.238328  255491 pod_ready.go:92] pod "kube-proxy-k2jz7" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.238358  255491 pod_ready.go:81] duration metric: took 4.627360634s waiting for pod "kube-proxy-k2jz7" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.238376  255491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.244985  255491 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace has status "Ready":"True"
	I0817 22:30:41.245011  255491 pod_ready.go:81] duration metric: took 6.626883ms waiting for pod "kube-scheduler-default-k8s-diff-port-321287" in "kube-system" namespace to be "Ready" ...
	I0817 22:30:41.245022  255491 pod_ready.go:38] duration metric: took 5.260700173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:30:41.245042  255491 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:30:41.245097  255491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:30:41.262899  255491 api_server.go:72] duration metric: took 5.559222986s to wait for apiserver process to appear ...
	I0817 22:30:41.262935  255491 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:30:41.262957  255491 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0817 22:30:41.268642  255491 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0817 22:30:41.269921  255491 api_server.go:141] control plane version: v1.27.4
	I0817 22:30:41.269947  255491 api_server.go:131] duration metric: took 7.005146ms to wait for apiserver health ...
	I0817 22:30:41.269955  255491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:30:41.276807  255491 system_pods.go:59] 9 kube-system pods found
	I0817 22:30:41.276844  255491 system_pods.go:61] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.276855  255491 system_pods.go:61] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.276863  255491 system_pods.go:61] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.276868  255491 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.276875  255491 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.276883  255491 system_pods.go:61] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.276890  255491 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.276908  255491 system_pods.go:61] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.276918  255491 system_pods.go:61] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.276929  255491 system_pods.go:74] duration metric: took 6.967523ms to wait for pod list to return data ...
	I0817 22:30:41.276941  255491 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:30:41.279696  255491 default_sa.go:45] found service account: "default"
	I0817 22:30:41.279724  255491 default_sa.go:55] duration metric: took 2.773544ms for default service account to be created ...
	I0817 22:30:41.279735  255491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:30:41.286220  255491 system_pods.go:86] 9 kube-system pods found
	I0817 22:30:41.286258  255491 system_pods.go:89] "coredns-5d78c9869d-2gh8n" [44728d42-fce0-4a11-ba30-094a44b9313a] Running
	I0817 22:30:41.286269  255491 system_pods.go:89] "coredns-5d78c9869d-zk9r5" [06faaf1b-1c1b-4bb6-b7b3-f6437a9f5cc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 22:30:41.286277  255491 system_pods.go:89] "etcd-default-k8s-diff-port-321287" [757ff4e3-befc-47f9-a6b8-1015658f7d3c] Running
	I0817 22:30:41.286283  255491 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-321287" [76e2446e-8187-46ba-b8aa-de2293b1addf] Running
	I0817 22:30:41.286287  255491 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-321287" [4168ec64-1add-4924-ad73-70d492876445] Running
	I0817 22:30:41.286292  255491 system_pods.go:89] "kube-proxy-k2jz7" [1fedb8b2-1800-4933-b964-6080cc760045] Running
	I0817 22:30:41.286296  255491 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-321287" [0fdbc928-ea26-40d5-bc1e-427bc08b8ed9] Running
	I0817 22:30:41.286302  255491 system_pods.go:89] "metrics-server-74d5c6b9c-lw5bp" [b197e3ce-ee02-467c-b87f-de8bc2b6802f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:30:41.286306  255491 system_pods.go:89] "storage-provisioner" [02b2bd5a-9e11-4476-81c5-fe927c4ef543] Running
	I0817 22:30:41.286316  255491 system_pods.go:126] duration metric: took 6.576272ms to wait for k8s-apps to be running ...
	I0817 22:30:41.286326  255491 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:30:41.286373  255491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:41.301841  255491 system_svc.go:56] duration metric: took 15.499888ms WaitForService to wait for kubelet.
	I0817 22:30:41.301874  255491 kubeadm.go:581] duration metric: took 5.598205886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:30:41.301898  255491 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:30:41.306253  255491 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:30:41.306289  255491 node_conditions.go:123] node cpu capacity is 2
	I0817 22:30:41.306300  255491 node_conditions.go:105] duration metric: took 4.396496ms to run NodePressure ...
	I0817 22:30:41.306311  255491 start.go:228] waiting for startup goroutines ...
	I0817 22:30:41.306320  255491 start.go:233] waiting for cluster config update ...
	I0817 22:30:41.306329  255491 start.go:242] writing updated cluster config ...
	I0817 22:30:41.306617  255491 ssh_runner.go:195] Run: rm -f paused
	I0817 22:30:41.363947  255491 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 22:30:41.366167  255491 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-321287" cluster and "default" namespace by default
	I0817 22:30:47.861835  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.151914062s)
	I0817 22:30:47.861926  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:30:47.877704  254975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 22:30:47.888385  254975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 22:30:47.898212  254975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 22:30:47.898269  254975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0817 22:30:47.957871  254975 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0817 22:30:47.958020  254975 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 22:30:48.121563  254975 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 22:30:48.121724  254975 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 22:30:48.121869  254975 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 22:30:48.316212  254975 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 22:30:48.316379  254975 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 22:30:48.324040  254975 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0817 22:30:48.453946  254975 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 22:30:48.456278  254975 out.go:204]   - Generating certificates and keys ...
	I0817 22:30:48.456403  254975 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 22:30:48.456486  254975 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 22:30:48.456629  254975 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0817 22:30:48.456723  254975 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0817 22:30:48.456831  254975 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0817 22:30:48.456916  254975 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0817 22:30:48.456992  254975 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0817 22:30:48.457084  254975 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0817 22:30:48.457233  254975 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0817 22:30:48.457347  254975 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0817 22:30:48.457400  254975 kubeadm.go:322] [certs] Using the existing "sa" key
	I0817 22:30:48.457478  254975 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 22:30:48.599977  254975 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 22:30:48.760474  254975 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 22:30:48.873066  254975 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 22:30:48.958450  254975 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 22:30:48.959335  254975 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 22:30:48.961565  254975 out.go:204]   - Booting up control plane ...
	I0817 22:30:48.961672  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 22:30:48.972854  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 22:30:48.974149  254975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 22:30:48.975110  254975 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 22:30:48.981334  254975 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 22:30:58.986028  254975 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004044 seconds
	I0817 22:30:58.986232  254975 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 22:30:59.005484  254975 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 22:30:59.530563  254975 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 22:30:59.530730  254975 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-294781 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 22:31:00.039739  254975 kubeadm.go:322] [bootstrap-token] Using token: y5v57w.cds9r5wk990e6rgq
	I0817 22:31:00.041700  254975 out.go:204]   - Configuring RBAC rules ...
	I0817 22:31:00.041831  254975 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 22:31:00.051302  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 22:31:00.056478  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 22:31:00.060403  254975 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 22:31:00.065454  254975 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 22:31:00.155583  254975 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 22:31:00.472429  254975 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 22:31:00.474442  254975 kubeadm.go:322] 
	I0817 22:31:00.474512  254975 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 22:31:00.474554  254975 kubeadm.go:322] 
	I0817 22:31:00.474671  254975 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 22:31:00.474686  254975 kubeadm.go:322] 
	I0817 22:31:00.474708  254975 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 22:31:00.474808  254975 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 22:31:00.474883  254975 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 22:31:00.474895  254975 kubeadm.go:322] 
	I0817 22:31:00.474973  254975 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 22:31:00.475082  254975 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 22:31:00.475179  254975 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 22:31:00.475193  254975 kubeadm.go:322] 
	I0817 22:31:00.475308  254975 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0817 22:31:00.475421  254975 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 22:31:00.475431  254975 kubeadm.go:322] 
	I0817 22:31:00.475551  254975 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.475696  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 \
	I0817 22:31:00.475750  254975 kubeadm.go:322]     --control-plane 	  
	I0817 22:31:00.475759  254975 kubeadm.go:322] 
	I0817 22:31:00.475881  254975 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 22:31:00.475937  254975 kubeadm.go:322] 
	I0817 22:31:00.476044  254975 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y5v57w.cds9r5wk990e6rgq \
	I0817 22:31:00.476196  254975 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:906b1efa8558b6c2a7509963c61816e54d6828fc734ddb5fbc5a67fcb3b3e944 
	I0817 22:31:00.476725  254975 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 22:31:00.476766  254975 cni.go:84] Creating CNI manager for ""
	I0817 22:31:00.476782  254975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 22:31:00.478932  254975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 22:31:00.480754  254975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0817 22:31:00.496449  254975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 22:31:00.527578  254975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 22:31:00.527658  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.527769  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=old-k8s-version-294781 minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.809784  254975 ops.go:34] apiserver oom_adj: -16
	I0817 22:31:00.809925  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:00.991957  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:01.627311  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.126890  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:02.626673  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.127657  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:03.627284  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.127320  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:04.627026  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.127336  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:05.626721  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.127279  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:06.626697  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.127307  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:07.626920  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.127266  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:08.626970  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.126923  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:09.626808  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.127298  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:10.627182  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.126639  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:11.626681  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.127321  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:12.626904  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.127274  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:13.627272  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.127457  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:14.627280  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.127333  254975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 22:31:15.231130  254975 kubeadm.go:1081] duration metric: took 14.703542822s to wait for elevateKubeSystemPrivileges.
	I0817 22:31:15.231183  254975 kubeadm.go:406] StartCluster complete in 5m43.780243338s
	I0817 22:31:15.231254  254975 settings.go:142] acquiring lock: {Name:mk4322b94ea7749c86239aace2065ea3ce60c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.231391  254975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:31:15.233245  254975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-203458/kubeconfig: {Name:mk78968eb8ec30ce311c742d834b8fb8e540240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 22:31:15.233533  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 22:31:15.233848  254975 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 22:31:15.233927  254975 config.go:182] Loaded profile config "old-k8s-version-294781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0817 22:31:15.233947  254975 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-294781"
	I0817 22:31:15.233968  254975 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-294781"
	W0817 22:31:15.233977  254975 addons.go:240] addon storage-provisioner should already be in state true
	I0817 22:31:15.233983  254975 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234001  254975 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-294781"
	I0817 22:31:15.234007  254975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-294781"
	I0817 22:31:15.234021  254975 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-294781"
	W0817 22:31:15.234040  254975 addons.go:240] addon metrics-server should already be in state true
	I0817 22:31:15.234075  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234097  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234549  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.234576  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234581  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.234650  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.252847  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0817 22:31:15.252891  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0817 22:31:15.253743  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.253833  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.254616  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254632  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.254713  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0817 22:31:15.254887  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.254906  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.255216  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255276  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.255294  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.255865  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255872  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255895  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.255960  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.255977  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.256400  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.256604  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.269860  254975 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-294781"
	W0817 22:31:15.269883  254975 addons.go:240] addon default-storageclass should already be in state true
	I0817 22:31:15.269911  254975 host.go:66] Checking if "old-k8s-version-294781" exists ...
	I0817 22:31:15.270304  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.270335  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.273014  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0817 22:31:15.273532  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.274114  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.274134  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.274549  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.274769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.276415  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.276491  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0817 22:31:15.278935  254975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 22:31:15.277380  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.278041  254975 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-294781" context rescaled to 1 replicas
	I0817 22:31:15.280642  254975 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 22:31:15.282441  254975 out.go:177] * Verifying Kubernetes components...
	I0817 22:31:15.280856  254975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.281832  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.284263  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.284347  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 22:31:15.284348  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:31:15.284366  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.285256  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.285580  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.288289  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.288456  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.290643  254975 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0817 22:31:15.289601  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.289769  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.292678  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 22:31:15.292693  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 22:31:15.292721  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.292776  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.293060  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.293277  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.293791  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.297193  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0817 22:31:15.297816  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.298486  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.298506  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.298962  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.299508  254975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 22:31:15.299531  254975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 22:31:15.300275  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.300994  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.301024  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.301098  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.301296  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.301502  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.301651  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.321283  254975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0817 22:31:15.321876  254975 main.go:141] libmachine: () Calling .GetVersion
	I0817 22:31:15.322943  254975 main.go:141] libmachine: Using API Version  1
	I0817 22:31:15.322971  254975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 22:31:15.323496  254975 main.go:141] libmachine: () Calling .GetMachineName
	I0817 22:31:15.323842  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetState
	I0817 22:31:15.326563  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .DriverName
	I0817 22:31:15.326910  254975 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.326933  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 22:31:15.326957  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHHostname
	I0817 22:31:15.330190  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.330947  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:be:6b", ip: ""} in network mk-old-k8s-version-294781: {Iface:virbr4 ExpiryTime:2023-08-17 23:14:21 +0000 UTC Type:0 Mac:52:54:00:8b:be:6b Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:old-k8s-version-294781 Clientid:01:52:54:00:8b:be:6b}
	I0817 22:31:15.330978  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | domain old-k8s-version-294781 has defined IP address 192.168.72.56 and MAC address 52:54:00:8b:be:6b in network mk-old-k8s-version-294781
	I0817 22:31:15.331193  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHPort
	I0817 22:31:15.331422  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHKeyPath
	I0817 22:31:15.331552  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .GetSSHUsername
	I0817 22:31:15.331681  254975 sshutil.go:53] new ssh client: &{IP:192.168.72.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/old-k8s-version-294781/id_rsa Username:docker}
	I0817 22:31:15.497277  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 22:31:15.529500  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 22:31:15.531359  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 22:31:15.531381  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0817 22:31:15.585477  254975 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.585494  254975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 22:31:15.590969  254975 node_ready.go:49] node "old-k8s-version-294781" has status "Ready":"True"
	I0817 22:31:15.591001  254975 node_ready.go:38] duration metric: took 5.470452ms waiting for node "old-k8s-version-294781" to be "Ready" ...
	I0817 22:31:15.591012  254975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:15.594026  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 22:31:15.594077  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 22:31:15.596784  254975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:15.638420  254975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:15.638455  254975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 22:31:15.707735  254975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 22:31:16.690916  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.193582768s)
	I0817 22:31:16.690987  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691002  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691002  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161462189s)
	I0817 22:31:16.691042  254975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105375097s)
	I0817 22:31:16.691044  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691217  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691158  254975 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0817 22:31:16.691422  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691464  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691490  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691530  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691561  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.691512  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691586  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.691603  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.691630  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.691813  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.691832  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692047  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692086  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.692110  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.692130  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.692114  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.692460  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.692480  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828440  254975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.120652237s)
	I0817 22:31:16.828511  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828525  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.828913  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.828939  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.828952  254975 main.go:141] libmachine: Making call to close driver server
	I0817 22:31:16.828963  254975 main.go:141] libmachine: (old-k8s-version-294781) Calling .Close
	I0817 22:31:16.829228  254975 main.go:141] libmachine: Successfully made call to close driver server
	I0817 22:31:16.829252  254975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0817 22:31:16.829264  254975 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-294781"
	I0817 22:31:16.829279  254975 main.go:141] libmachine: (old-k8s-version-294781) DBG | Closing plugin on server side
	I0817 22:31:16.831430  254975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0817 22:31:16.834005  254975 addons.go:502] enable addons completed in 1.600151352s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0817 22:31:17.618673  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.110224  254975 pod_ready.go:102] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"False"
	I0817 22:31:20.610989  254975 pod_ready.go:92] pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.611015  254975 pod_ready.go:81] duration metric: took 5.014205232s waiting for pod "coredns-5644d7b6d9-b9p7t" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.611025  254975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616618  254975 pod_ready.go:92] pod "kube-proxy-44jmp" in "kube-system" namespace has status "Ready":"True"
	I0817 22:31:20.616639  254975 pod_ready.go:81] duration metric: took 5.608097ms waiting for pod "kube-proxy-44jmp" in "kube-system" namespace to be "Ready" ...
	I0817 22:31:20.616646  254975 pod_ready.go:38] duration metric: took 5.025620457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 22:31:20.616695  254975 api_server.go:52] waiting for apiserver process to appear ...
	I0817 22:31:20.616748  254975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 22:31:20.633102  254975 api_server.go:72] duration metric: took 5.352419031s to wait for apiserver process to appear ...
	I0817 22:31:20.633131  254975 api_server.go:88] waiting for apiserver healthz status ...
	I0817 22:31:20.633152  254975 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8443/healthz ...
	I0817 22:31:20.640585  254975 api_server.go:279] https://192.168.72.56:8443/healthz returned 200:
	ok
	I0817 22:31:20.641784  254975 api_server.go:141] control plane version: v1.16.0
	I0817 22:31:20.641807  254975 api_server.go:131] duration metric: took 8.66923ms to wait for apiserver health ...
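	(Aside, not part of the captured log: the healthz wait reported just above amounts to polling the apiserver's /healthz endpoint until it answers 200. A minimal sketch of that pattern, assuming hypothetical names and timeouts rather than minikube's actual api_server.go code:)

```go
// Sketch only: illustrates the healthz-polling pattern the log reflects.
// waitForHealthz, the timeout values, and InsecureSkipVerify are assumptions,
// not minikube's real implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.56:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}
```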
	I0817 22:31:20.641815  254975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 22:31:20.647851  254975 system_pods.go:59] 4 kube-system pods found
	I0817 22:31:20.647904  254975 system_pods.go:61] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.647909  254975 system_pods.go:61] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.647917  254975 system_pods.go:61] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.647923  254975 system_pods.go:61] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.647929  254975 system_pods.go:74] duration metric: took 6.108947ms to wait for pod list to return data ...
	I0817 22:31:20.647937  254975 default_sa.go:34] waiting for default service account to be created ...
	I0817 22:31:20.651451  254975 default_sa.go:45] found service account: "default"
	I0817 22:31:20.651485  254975 default_sa.go:55] duration metric: took 3.540013ms for default service account to be created ...
	I0817 22:31:20.651496  254975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 22:31:20.655529  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.655556  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.655561  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.655567  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.655575  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.655593  254975 retry.go:31] will retry after 194.203175ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:20.860033  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:20.860063  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:20.860069  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:20.860076  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:20.860082  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:20.860098  254975 retry.go:31] will retry after 273.217607ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.138457  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.138483  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.138488  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.138494  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.138501  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.138520  254975 retry.go:31] will retry after 311.999616ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.455473  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.455507  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.455513  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.455519  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.455526  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.455542  254975 retry.go:31] will retry after 462.378441ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:21.922656  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:21.922695  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:21.922703  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:21.922714  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:21.922724  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:21.922743  254975 retry.go:31] will retry after 595.850716ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:22.525024  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:22.525067  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:22.525076  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:22.525087  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:22.525100  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:22.525123  254975 retry.go:31] will retry after 916.880182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:23.446648  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:23.446678  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:23.446684  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:23.446691  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:23.446697  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:23.446717  254975 retry.go:31] will retry after 1.080769148s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:24.532239  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:24.532270  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:24.532277  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:24.532287  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:24.532296  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:24.532325  254975 retry.go:31] will retry after 1.261174641s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:25.798397  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:25.798430  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:25.798435  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:25.798442  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:25.798449  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:25.798465  254975 retry.go:31] will retry after 1.383083099s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:27.187782  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:27.187816  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:27.187821  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:27.187828  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:27.187834  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:27.187852  254975 retry.go:31] will retry after 1.954135672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:29.148294  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:29.148325  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:29.148330  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:29.148337  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:29.148344  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:29.148359  254975 retry.go:31] will retry after 2.632641562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:31.786946  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:31.786981  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:31.786988  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:31.786998  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:31.787010  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:31.787030  254975 retry.go:31] will retry after 3.626446493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:35.421023  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:35.421053  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:35.421059  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:35.421065  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:35.421072  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:35.421089  254975 retry.go:31] will retry after 2.800907689s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:38.228118  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:38.228155  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:38.228165  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:38.228177  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:38.228187  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:38.228204  254975 retry.go:31] will retry after 3.699626464s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:41.932868  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:41.932902  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:41.932908  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:41.932915  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:41.932922  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:41.932939  254975 retry.go:31] will retry after 6.965217948s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:48.913824  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:48.913866  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:48.913875  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:48.913899  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:48.913909  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:48.913931  254975 retry.go:31] will retry after 7.880328521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:31:56.800829  254975 system_pods.go:86] 4 kube-system pods found
	I0817 22:31:56.800868  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:31:56.800876  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:31:56.800887  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:31:56.800893  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:31:56.800915  254975 retry.go:31] will retry after 7.054585059s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0817 22:32:03.878268  254975 system_pods.go:86] 7 kube-system pods found
	I0817 22:32:03.878297  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:03.878304  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Pending
	I0817 22:32:03.878308  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Pending
	I0817 22:32:03.878311  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:03.878316  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:03.878324  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:03.878331  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:03.878351  254975 retry.go:31] will retry after 13.129481457s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0817 22:32:17.015570  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:17.015609  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:17.015619  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:17.015627  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:17.015634  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Pending
	I0817 22:32:17.015640  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:17.015647  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:17.015672  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:17.015682  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:17.015709  254975 retry.go:31] will retry after 15.332291563s: missing components: kube-controller-manager
	I0817 22:32:32.354549  254975 system_pods.go:86] 8 kube-system pods found
	I0817 22:32:32.354587  254975 system_pods.go:89] "coredns-5644d7b6d9-b9p7t" [979f2255-c423-4ef7-b95b-9c141e84d92c] Running
	I0817 22:32:32.354596  254975 system_pods.go:89] "etcd-old-k8s-version-294781" [aea94f0c-a880-4903-82a2-66308b001d80] Running
	I0817 22:32:32.354603  254975 system_pods.go:89] "kube-apiserver-old-k8s-version-294781" [f5e58f4c-9a3a-4a14-854f-24d60543910f] Running
	I0817 22:32:32.354613  254975 system_pods.go:89] "kube-controller-manager-old-k8s-version-294781" [ba1da748-6462-49ed-807a-8ea15c6a4778] Running
	I0817 22:32:32.354619  254975 system_pods.go:89] "kube-proxy-44jmp" [8e2b139e-7ff6-4dcc-8d80-d62b4096033f] Running
	I0817 22:32:32.354626  254975 system_pods.go:89] "kube-scheduler-old-k8s-version-294781" [c956f258-136a-4c58-8ced-ebe4fbc3427e] Running
	I0817 22:32:32.354637  254975 system_pods.go:89] "metrics-server-74d5856cc6-4nqrx" [0984dab2-6245-4726-b46f-5d926ac1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 22:32:32.354646  254975 system_pods.go:89] "storage-provisioner" [09ae46d9-b7ba-47b7-b28b-65699755f428] Running
	I0817 22:32:32.354657  254975 system_pods.go:126] duration metric: took 1m11.703154434s to wait for k8s-apps to be running ...
	I0817 22:32:32.354700  254975 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 22:32:32.354766  254975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 22:32:32.372492  254975 system_svc.go:56] duration metric: took 17.765249ms WaitForService to wait for kubelet.
	I0817 22:32:32.372541  254975 kubeadm.go:581] duration metric: took 1m17.091866023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 22:32:32.372573  254975 node_conditions.go:102] verifying NodePressure condition ...
	I0817 22:32:32.377413  254975 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0817 22:32:32.377442  254975 node_conditions.go:123] node cpu capacity is 2
	I0817 22:32:32.377455  254975 node_conditions.go:105] duration metric: took 4.875282ms to run NodePressure ...
	I0817 22:32:32.377467  254975 start.go:228] waiting for startup goroutines ...
	I0817 22:32:32.377473  254975 start.go:233] waiting for cluster config update ...
	I0817 22:32:32.377483  254975 start.go:242] writing updated cluster config ...
	I0817 22:32:32.377828  254975 ssh_runner.go:195] Run: rm -f paused
	I0817 22:32:32.433865  254975 start.go:600] kubectl: 1.28.0, cluster: 1.16.0 (minor skew: 12)
	I0817 22:32:32.436131  254975 out.go:177] 
	W0817 22:32:32.437621  254975 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0817 22:32:32.439072  254975 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0817 22:32:32.440794  254975 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-294781" cluster and "default" namespace by default
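	(Aside, not part of the captured log: the long run of "will retry after ..." lines in this start log comes from a retry loop with a growing delay while system pods come up. A minimal sketch of that pattern under assumed names; retryWithBackoff and the delays are illustrative, not minikube's retry.go API:)

```go
// Sketch only: a generic retry-with-growing-backoff loop like the one the
// "retry.go:31] will retry after ..." lines report. All names and delay
// values here are assumptions.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls check until it succeeds or maxWait elapses,
// roughly doubling the pause between attempts.
func retryWithBackoff(check func() error, maxWait time.Duration) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: etcd, kube-apiserver")
		}
		return nil // all expected kube-system pods reported Running
	}, 2*time.Minute)
	fmt.Println("result:", err)
}
```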
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-08-17 22:25:10 UTC, ends at Thu 2023-08-17 22:43:26 UTC. --
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.026137558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42ec057a-6686-4089-a39c-3583bfc341c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.026450909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42ec057a-6686-4089-a39c-3583bfc341c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.065776652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=281cdb90-fa6c-40d5-ae4b-566a7e94a78c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.065870030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=281cdb90-fa6c-40d5-ae4b-566a7e94a78c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.066027465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=281cdb90-fa6c-40d5-ae4b-566a7e94a78c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.102710675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=757e15d5-d9aa-42ee-89da-a19a3473a3f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.102813382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=757e15d5-d9aa-42ee-89da-a19a3473a3f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.103042368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=757e15d5-d9aa-42ee-89da-a19a3473a3f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.142616958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6934fb54-10b7-4421-acbd-c06ced40fab1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.142746740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6934fb54-10b7-4421-acbd-c06ced40fab1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.142996986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6934fb54-10b7-4421-acbd-c06ced40fab1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.188137906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ef062a3-2258-41e0-9bea-28e05309945e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.188235242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ef062a3-2258-41e0-9bea-28e05309945e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.188403468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ef062a3-2258-41e0-9bea-28e05309945e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.215758755Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=5410d390-5234-468e-b60d-926ff785dbc1 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.215977720Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5786bd76fc307fa791bc4d96c540d6ef66c73e06ce4f9116b3fcf388e641ba95,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-4nqrx,Uid:0984dab2-6245-4726-b46f-5d926ac1acaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311478055421968,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-4nqrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0984dab2-6245-4726-b46f-5d926ac1acaf,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:31:17.702671986Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-b9p7t,Uid:979f2255-c423-4ef7-b95b-9c141
e84d92c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311477653987573,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:31:16.408445811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:09ae46d9-b7ba-47b7-b28b-65699755f428,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311477055248225,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-6
5699755f428,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-17T22:31:16.701859023Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&PodSandboxMetadata{Name:kube-proxy-44jmp,Uid:8e2b139e-7ff6-4dcc-8d8
0-d62b4096033f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311475787113647,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b139e-7ff6-4dcc-8d80-d62b4096033f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-17T22:31:15.431289342Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-294781,Uid:15ef7d9f6ef071876690b2a7113c9a02,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311449413362428,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,tier: contr
ol-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15ef7d9f6ef071876690b2a7113c9a02,kubernetes.io/config.seen: 2023-08-17T22:30:48.99912296Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-294781,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311449391586596,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-08-17T22:30:48.992868778Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6156d77a5afd47d3ac7b80301b79e
676f31233cac3892075a0bc9d6d0055c28f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-294781,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311449373968919,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-08-17T22:30:48.995995751Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-294781,Uid:628e53643ab6d4e9922be9725875c975,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1692311449337713643,Labels:map[string]string{component: kube-apiserver,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 628e53643ab6d4e9922be9725875c975,kubernetes.io/config.seen: 2023-08-17T22:30:48.979967962Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=5410d390-5234-468e-b60d-926ff785dbc1 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.216707330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=780695e2-0aae-4a0d-bc95-82d85f52fb68 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.216780078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=780695e2-0aae-4a0d-bc95-82d85f52fb68 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.217008527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=780695e2-0aae-4a0d-bc95-82d85f52fb68 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.228951132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=12b31826-86bf-435b-8281-1087e2871c16 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.229063713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=12b31826-86bf-435b-8281-1087e2871c16 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.229214514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=12b31826-86bf-435b-8281-1087e2871c16 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.260227439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a8819134-cd67-4860-9aed-3a3fe064dc6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.260312611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8819134-cd67-4860-9aed-3a3fe064dc6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 17 22:43:26 old-k8s-version-294781 crio[711]: time="2023-08-17 22:43:26.260682090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263,PodSandboxId:dd0043023ef6559e8fa7d6e1b12616132efed3067d2796dd8795e60feb20cc59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1692311477867479627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ae46d9-b7ba-47b7-b28b-65699755f428,},Annotations:map[string]string{io.kubernetes.container.hash: d51a6cb,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38,PodSandboxId:edbf0319fa5c65a2d5237a7012f332c7deff6d5cfc9ddcb2fc3c712388db3d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1692311477963058941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-b9p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 979f2255-c423-4ef7-b95b-9c141e84d92c,},Annotations:map[string]string{io.kubernetes.container.hash: 39c56395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb,PodSandboxId:d10bb4533f5b3cb20d9ccb5edc44985d84bb31a4d8c5bab48a1868513c20549f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1692311477268920572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-44jmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2b13
9e-7ff6-4dcc-8d80-d62b4096033f,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebe6f83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6,PodSandboxId:815dd72984ffce2209e32492c4d5252b18132174d821404c6b048a97c637aa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1692311451159049266,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ef7d9f6ef071876690b2a7113c9a02,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f8b145fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731,PodSandboxId:6156d77a5afd47d3ac7b80301b79e676f31233cac3892075a0bc9d6d0055c28f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1692311450281982179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string
{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d,PodSandboxId:13f126d501c3f2ffa2d75938bc6eae92f83b8c58419425578517368952acdd5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1692311449998599184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628e53643ab6d4e9922be9725875c975,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9cbf94c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e,PodSandboxId:b280926ea75f5d1798621b7a818b7196d5f2f71e143e37cf373d37afa846dc42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1692311449977471990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-294781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8819134-cd67-4860-9aed-3a3fe064dc6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	c3a070079a5db       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   edbf0319fa5c6
	1028581d1dbc5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   dd0043023ef65
	7b1fa03e7d897       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   d10bb4533f5b3
	72077201639f7       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   12 minutes ago      Running             etcd                      0                   815dd72984ffc
	69cb530e82258       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   12 minutes ago      Running             kube-scheduler            0                   6156d77a5afd4
	c790de9f398ee       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   12 minutes ago      Running             kube-apiserver            0                   13f126d501c3f
	08d224b61e1f0       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   12 minutes ago      Running             kube-controller-manager   0                   b280926ea75f5
	
	* 
	* ==> coredns [c3a070079a5dbf89419a760dc6dbf004565b56f92280c892741d8ed43f33de38] <==
	* .:53
	2023-08-17T22:31:18.297Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-08-17T22:31:18.297Z [INFO] CoreDNS-1.6.2
	2023-08-17T22:31:18.297Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-08-17T22:31:18.312Z [INFO] 127.0.0.1:51858 - 56078 "HINFO IN 1054767967733793581.3854523874822987122. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014180577s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-294781
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-294781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=old-k8s-version-294781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T22_31_00_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 22:30:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 22:42:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 22:42:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 22:42:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 22:42:55 +0000   Thu, 17 Aug 2023 22:30:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.56
	  Hostname:    old-k8s-version-294781
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 a33cc6505fd84c9f9ec3652fc9a21038
	 System UUID:                a33cc650-5fd8-4c9f-9ec3-652fc9a21038
	 Boot ID:                    1570635a-ff79-481b-860b-640904c2786a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-b9p7t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-294781                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-294781             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-294781    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-44jmp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-294781             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-4nqrx                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-294781     Node old-k8s-version-294781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-294781     Node old-k8s-version-294781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-294781     Node old-k8s-version-294781 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-294781  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug17 22:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.101490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.088903] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.579793] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154093] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.610403] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.094998] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.131091] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.139481] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.109287] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.248419] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +20.247432] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +0.470018] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.997654] kauditd_printk_skb: 3 callbacks suppressed
	[Aug17 22:26] kauditd_printk_skb: 2 callbacks suppressed
	[Aug17 22:30] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.618009] systemd-fstab-generator[3220]: Ignoring "noauto" for root device
	[Aug17 22:31] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [72077201639f70b06df2cad73697cc715a7284c581a80acc059c3198ce499ce6] <==
	* 2023-08-17 22:30:51.364126 I | raft: 3a1fc7f0094834a7 became follower at term 0
	2023-08-17 22:30:51.364158 I | raft: newRaft 3a1fc7f0094834a7 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-08-17 22:30:51.364174 I | raft: 3a1fc7f0094834a7 became follower at term 1
	2023-08-17 22:30:51.375653 W | auth: simple token is not cryptographically signed
	2023-08-17 22:30:51.382306 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-08-17 22:30:51.384378 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-17 22:30:51.384649 I | embed: listening for metrics on http://192.168.72.56:2381
	2023-08-17 22:30:51.384907 I | etcdserver: 3a1fc7f0094834a7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-17 22:30:51.385590 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-17 22:30:51.385987 I | etcdserver/membership: added member 3a1fc7f0094834a7 [https://192.168.72.56:2380] to cluster 2e67888e462b31f7
	2023-08-17 22:30:51.964723 I | raft: 3a1fc7f0094834a7 is starting a new election at term 1
	2023-08-17 22:30:51.964786 I | raft: 3a1fc7f0094834a7 became candidate at term 2
	2023-08-17 22:30:51.964806 I | raft: 3a1fc7f0094834a7 received MsgVoteResp from 3a1fc7f0094834a7 at term 2
	2023-08-17 22:30:51.964822 I | raft: 3a1fc7f0094834a7 became leader at term 2
	2023-08-17 22:30:51.964829 I | raft: raft.node: 3a1fc7f0094834a7 elected leader 3a1fc7f0094834a7 at term 2
	2023-08-17 22:30:51.965079 I | etcdserver: setting up the initial cluster version to 3.3
	2023-08-17 22:30:51.966741 I | etcdserver: published {Name:old-k8s-version-294781 ClientURLs:[https://192.168.72.56:2379]} to cluster 2e67888e462b31f7
	2023-08-17 22:30:51.966989 I | embed: ready to serve client requests
	2023-08-17 22:30:51.967282 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-08-17 22:30:51.967374 I | etcdserver/api: enabled capabilities for version 3.3
	2023-08-17 22:30:51.967426 I | embed: ready to serve client requests
	2023-08-17 22:30:51.968326 I | embed: serving client requests on 192.168.72.56:2379
	2023-08-17 22:30:51.968498 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-17 22:40:51.993720 I | mvcc: store.index: compact 665
	2023-08-17 22:40:51.995985 I | mvcc: finished scheduled compaction at 665 (took 1.802222ms)
	
	* 
	* ==> kernel <==
	*  22:43:26 up 18 min,  0 users,  load average: 0.16, 0.14, 0.16
	Linux old-k8s-version-294781 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c790de9f398ee799bf8079021a10afec4b370381feb4cfd54adb68099d5aa07d] <==
	* I0817 22:35:56.376209       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:35:56.376364       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:35:56.376451       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:35:56.376464       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:36:56.376875       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:36:56.377205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:36:56.377351       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:36:56.377399       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:38:56.378043       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:38:56.378156       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:38:56.378218       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:38:56.378226       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:40:56.381390       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:40:56.381563       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:40:56.381642       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:40:56.381668       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 22:41:56.382157       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0817 22:41:56.382355       1 handler_proxy.go:99] no RequestInfo found in the context
	E0817 22:41:56.382410       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 22:41:56.382436       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [08d224b61e1f09a9c7f80e74f9db8ad3de15f7c0502dd0e6e5bbdae5c6073b3e] <==
	* W0817 22:37:07.326845       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:37:18.203352       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:37:39.329113       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:37:48.455946       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:38:11.331190       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:38:18.708384       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:38:43.333876       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:38:48.961574       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:39:15.336389       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:39:19.213800       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:39:47.338956       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:39:49.465914       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:40:19.341408       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:40:19.718001       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0817 22:40:49.970113       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:40:51.343566       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:41:20.222656       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:41:23.346289       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:41:50.474819       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:41:55.348626       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:42:20.727150       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:42:27.350974       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:42:50.979403       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 22:42:59.352819       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 22:43:21.231599       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [7b1fa03e7d8979177102312e46d975d44b43a2237057ecc24a20d8a7963fb8eb] <==
	* W0817 22:31:17.746266       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0817 22:31:17.773304       1 node.go:135] Successfully retrieved node IP: 192.168.72.56
	I0817 22:31:17.773360       1 server_others.go:149] Using iptables Proxier.
	I0817 22:31:17.775856       1 server.go:529] Version: v1.16.0
	I0817 22:31:17.783620       1 config.go:313] Starting service config controller
	I0817 22:31:17.783680       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0817 22:31:17.783809       1 config.go:131] Starting endpoints config controller
	I0817 22:31:17.783853       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0817 22:31:17.887713       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0817 22:31:17.888190       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [69cb530e822589c1f14348e5a6603ef08cc4ed9f2b72721fbe79ecd428597731] <==
	* I0817 22:30:55.386607       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0817 22:30:55.386948       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0817 22:30:55.433097       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:30:55.433432       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 22:30:55.440045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:55.440283       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:30:55.452906       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:30:55.453147       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:30:55.453376       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:30:55.453629       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:30:55.456944       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:30:55.457027       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 22:30:55.457279       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:56.434686       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 22:30:56.449732       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 22:30:56.451601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 22:30:56.453272       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 22:30:56.455481       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 22:30:56.458659       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 22:30:56.460191       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 22:30:56.461432       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 22:30:56.462470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 22:30:56.463892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 22:30:56.474979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 22:31:14.898857       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-08-17 22:25:10 UTC, ends at Thu 2023-08-17 22:43:26 UTC. --
	Aug 17 22:38:58 old-k8s-version-294781 kubelet[3226]: E0817 22:38:58.835654    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:11 old-k8s-version-294781 kubelet[3226]: E0817 22:39:11.835882    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:24 old-k8s-version-294781 kubelet[3226]: E0817 22:39:24.835726    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:35 old-k8s-version-294781 kubelet[3226]: E0817 22:39:35.835310    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:39:47 old-k8s-version-294781 kubelet[3226]: E0817 22:39:47.835435    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:00 old-k8s-version-294781 kubelet[3226]: E0817 22:40:00.835842    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:14 old-k8s-version-294781 kubelet[3226]: E0817 22:40:14.835414    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:28 old-k8s-version-294781 kubelet[3226]: E0817 22:40:28.835710    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:41 old-k8s-version-294781 kubelet[3226]: E0817 22:40:41.835296    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:40:48 old-k8s-version-294781 kubelet[3226]: E0817 22:40:48.908728    3226 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Aug 17 22:40:52 old-k8s-version-294781 kubelet[3226]: E0817 22:40:52.835957    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:03 old-k8s-version-294781 kubelet[3226]: E0817 22:41:03.835404    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:17 old-k8s-version-294781 kubelet[3226]: E0817 22:41:17.839246    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:31 old-k8s-version-294781 kubelet[3226]: E0817 22:41:31.835355    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:41:46 old-k8s-version-294781 kubelet[3226]: E0817 22:41:46.835882    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:42:01 old-k8s-version-294781 kubelet[3226]: E0817 22:42:01.855778    3226 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 17 22:42:01 old-k8s-version-294781 kubelet[3226]: E0817 22:42:01.855901    3226 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 17 22:42:01 old-k8s-version-294781 kubelet[3226]: E0817 22:42:01.855961    3226 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 17 22:42:01 old-k8s-version-294781 kubelet[3226]: E0817 22:42:01.855998    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Aug 17 22:42:15 old-k8s-version-294781 kubelet[3226]: E0817 22:42:15.835462    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:42:27 old-k8s-version-294781 kubelet[3226]: E0817 22:42:27.835448    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:42:39 old-k8s-version-294781 kubelet[3226]: E0817 22:42:39.835045    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:42:50 old-k8s-version-294781 kubelet[3226]: E0817 22:42:50.835375    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:43:03 old-k8s-version-294781 kubelet[3226]: E0817 22:43:03.835362    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 17 22:43:18 old-k8s-version-294781 kubelet[3226]: E0817 22:43:18.836061    3226 pod_workers.go:191] Error syncing pod 0984dab2-6245-4726-b46f-5d926ac1acaf ("metrics-server-74d5856cc6-4nqrx_kube-system(0984dab2-6245-4726-b46f-5d926ac1acaf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [1028581d1dbc5ec382eebb23a1c3276603fc32052151f1c6d5d5fbcecf4b9263] <==
	* I0817 22:31:18.313355       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 22:31:18.326286       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 22:31:18.326401       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 22:31:18.354240       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 22:31:18.355403       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-294781_65043a18-cf2f-4327-a44f-39d4d4062b92!
	I0817 22:31:18.356445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ad1d782-fad3-4aeb-a59b-781a98197afa", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-294781_65043a18-cf2f-4327-a44f-39d4d4062b92 became leader
	I0817 22:31:18.456221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-294781_65043a18-cf2f-4327-a44f-39d4d4062b92!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-294781 -n old-k8s-version-294781
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-294781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-4nqrx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-294781 describe pod metrics-server-74d5856cc6-4nqrx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-294781 describe pod metrics-server-74d5856cc6-4nqrx: exit status 1 (72.524497ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-4nqrx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-294781 describe pod metrics-server-74d5856cc6-4nqrx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (111.94s)
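Note: the kubelet log above shows metrics-server stuck in ImagePullBackOff because the addon test deliberately points it at the unresolvable registry fake.domain, and the apiserver/controller-manager logs show the matching 503s for v1beta1.metrics.k8s.io. A minimal sketch for reproducing by hand the checks this test performs, assuming the profile/context name from the log and the usual k8s-app=metrics-server label used by the minikube addon (the label selector is an assumption, not taken from this report):

	# addon state for the profile seen in the log
	minikube addons list -p old-k8s-version-294781
	# pods the test waits on (label selector assumed)
	kubectl --context old-k8s-version-294781 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context old-k8s-version-294781 -n kube-system describe pods -l k8s-app=metrics-server
	# aggregated API that the apiserver log reports as unavailable
	kubectl --context old-k8s-version-294781 get apiservice v1beta1.metrics.k8s.io

With the image host intentionally broken, the describe output would be expected to show the same ErrImagePull/ImagePullBackOff events as the kubelet log, and the APIService to report Available=False.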

                                                
                                    

Test pass (235/300)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.22
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.4/json-events 4.91
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.06
17 TestDownloadOnly/v1.28.0-rc.1/json-events 5.36
18 TestDownloadOnly/v1.28.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.1/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.56
27 TestOffline 65.17
29 TestAddons/Setup 145.29
31 TestAddons/parallel/Registry 17.3
33 TestAddons/parallel/InspektorGadget 12.3
34 TestAddons/parallel/MetricsServer 6.29
35 TestAddons/parallel/HelmTiller 13.55
37 TestAddons/parallel/CSI 61.28
38 TestAddons/parallel/Headlamp 15.41
39 TestAddons/parallel/CloudSpanner 5.98
42 TestAddons/serial/GCPAuth/Namespaces 0.14
44 TestCertOptions 110.34
45 TestCertExpiration 300.34
47 TestForceSystemdFlag 122.3
48 TestForceSystemdEnv 80.09
50 TestKVMDriverInstallOrUpdate 1.58
54 TestErrorSpam/setup 47.52
55 TestErrorSpam/start 0.35
56 TestErrorSpam/status 0.74
57 TestErrorSpam/pause 1.48
58 TestErrorSpam/unpause 1.72
59 TestErrorSpam/stop 2.23
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 62.54
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 33.43
66 TestFunctional/serial/KubeContext 0.05
67 TestFunctional/serial/KubectlGetPods 0.09
70 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
71 TestFunctional/serial/CacheCmd/cache/add_local 1.12
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
73 TestFunctional/serial/CacheCmd/cache/list 0.05
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
75 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
76 TestFunctional/serial/CacheCmd/cache/delete 0.1
77 TestFunctional/serial/MinikubeKubectlCmd 0.11
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
79 TestFunctional/serial/ExtraConfig 35.17
80 TestFunctional/serial/ComponentHealth 0.08
81 TestFunctional/serial/LogsCmd 1.44
83 TestFunctional/serial/InvalidService 5.42
85 TestFunctional/parallel/ConfigCmd 0.32
86 TestFunctional/parallel/DashboardCmd 59.35
87 TestFunctional/parallel/DryRun 0.31
88 TestFunctional/parallel/InternationalLanguage 0.15
89 TestFunctional/parallel/StatusCmd 1.22
93 TestFunctional/parallel/ServiceCmdConnect 12.76
94 TestFunctional/parallel/AddonsCmd 0.15
95 TestFunctional/parallel/PersistentVolumeClaim 52.46
97 TestFunctional/parallel/SSHCmd 0.46
98 TestFunctional/parallel/CpCmd 1.05
99 TestFunctional/parallel/MySQL 30.18
100 TestFunctional/parallel/FileSync 0.26
101 TestFunctional/parallel/CertSync 1.48
105 TestFunctional/parallel/NodeLabels 0.07
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
109 TestFunctional/parallel/License 0.14
110 TestFunctional/parallel/ServiceCmd/DeployApp 12.27
111 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
112 TestFunctional/parallel/ProfileCmd/profile_list 0.33
113 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
114 TestFunctional/parallel/MountCmd/any-port 9.96
115 TestFunctional/parallel/MountCmd/specific-port 1.93
116 TestFunctional/parallel/ServiceCmd/List 0.35
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
119 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
120 TestFunctional/parallel/ServiceCmd/Format 0.38
121 TestFunctional/parallel/ServiceCmd/URL 0.34
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/Version/short 0.05
126 TestFunctional/parallel/Version/components 0.91
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.81
141 TestFunctional/parallel/ImageCommands/Setup 0.96
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 9.28
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.7
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.74
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.01
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.92
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.26
149 TestFunctional/delete_addon-resizer_images 0.07
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 114.75
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.62
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
162 TestJSONOutput/start/Command 62.45
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.7
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.64
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 7.09
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.2
190 TestMainNoArgs 0.05
191 TestMinikubeProfile 101.37
194 TestMountStart/serial/StartWithMountFirst 29.67
195 TestMountStart/serial/VerifyMountFirst 0.39
196 TestMountStart/serial/StartWithMountSecond 28.88
197 TestMountStart/serial/VerifyMountSecond 0.39
198 TestMountStart/serial/DeleteFirst 0.91
199 TestMountStart/serial/VerifyMountPostDelete 0.39
200 TestMountStart/serial/Stop 1.23
201 TestMountStart/serial/RestartStopped 23.12
202 TestMountStart/serial/VerifyMountPostStop 0.39
205 TestMultiNode/serial/FreshStart2Nodes 114.43
206 TestMultiNode/serial/DeployApp2Nodes 4.84
208 TestMultiNode/serial/AddNode 40.5
209 TestMultiNode/serial/ProfileList 0.21
210 TestMultiNode/serial/CopyFile 7.54
211 TestMultiNode/serial/StopNode 2.99
212 TestMultiNode/serial/StartAfterStop 32.69
214 TestMultiNode/serial/DeleteNode 1.81
216 TestMultiNode/serial/RestartMultiNode 441.59
217 TestMultiNode/serial/ValidateNameConflict 49.21
224 TestScheduledStopUnix 116.53
230 TestKubernetesUpgrade 155.8
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
237 TestNoKubernetes/serial/StartWithK8s 103.01
242 TestNetworkPlugins/group/false 3.04
246 TestNoKubernetes/serial/StartWithStopK8s 75.36
247 TestNoKubernetes/serial/Start 28.3
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
249 TestNoKubernetes/serial/ProfileList 0.76
250 TestNoKubernetes/serial/Stop 1.35
251 TestNoKubernetes/serial/StartNoArgs 45.13
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
253 TestStoppedBinaryUpgrade/Setup 0.35
263 TestPause/serial/Start 125.97
264 TestNetworkPlugins/group/auto/Start 109.23
265 TestNetworkPlugins/group/kindnet/Start 85.12
266 TestPause/serial/SecondStartNoReconfiguration 31.3
267 TestPause/serial/Pause 0.72
268 TestPause/serial/VerifyStatus 0.25
269 TestPause/serial/Unpause 0.64
270 TestPause/serial/PauseAgain 0.96
271 TestPause/serial/DeletePaused 1
272 TestPause/serial/VerifyDeletedResources 0.77
273 TestNetworkPlugins/group/calico/Start 93.9
274 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
275 TestNetworkPlugins/group/auto/KubeletFlags 0.22
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
277 TestNetworkPlugins/group/auto/NetCatPod 12.4
278 TestNetworkPlugins/group/kindnet/NetCatPod 11.46
279 TestNetworkPlugins/group/kindnet/DNS 0.22
280 TestNetworkPlugins/group/kindnet/Localhost 0.2
281 TestNetworkPlugins/group/kindnet/HairPin 0.22
282 TestNetworkPlugins/group/auto/DNS 0.28
283 TestNetworkPlugins/group/auto/Localhost 0.21
284 TestNetworkPlugins/group/auto/HairPin 0.19
285 TestNetworkPlugins/group/custom-flannel/Start 90.04
286 TestNetworkPlugins/group/enable-default-cni/Start 135.98
287 TestNetworkPlugins/group/calico/ControllerPod 5.03
288 TestNetworkPlugins/group/calico/KubeletFlags 0.22
289 TestNetworkPlugins/group/calico/NetCatPod 13.44
290 TestNetworkPlugins/group/calico/DNS 0.23
291 TestNetworkPlugins/group/calico/Localhost 0.21
292 TestNetworkPlugins/group/calico/HairPin 0.23
293 TestNetworkPlugins/group/flannel/Start 103.96
294 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
295 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.5
296 TestNetworkPlugins/group/custom-flannel/DNS 0.17
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
299 TestStoppedBinaryUpgrade/MinikubeLogs 0.43
300 TestNetworkPlugins/group/bridge/Start 126.45
302 TestStartStop/group/old-k8s-version/serial/FirstStart 189.99
303 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
304 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.52
305 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
306 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
307 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
309 TestStartStop/group/no-preload/serial/FirstStart 145
310 TestNetworkPlugins/group/flannel/ControllerPod 5.04
311 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
312 TestNetworkPlugins/group/flannel/NetCatPod 13.47
313 TestNetworkPlugins/group/flannel/DNS 0.19
314 TestNetworkPlugins/group/flannel/Localhost 0.16
315 TestNetworkPlugins/group/flannel/HairPin 0.16
317 TestStartStop/group/embed-certs/serial/FirstStart 106.9
318 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
319 TestNetworkPlugins/group/bridge/NetCatPod 13.61
320 TestNetworkPlugins/group/bridge/DNS 0.19
321 TestNetworkPlugins/group/bridge/Localhost 0.15
322 TestNetworkPlugins/group/bridge/HairPin 0.15
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.57
325 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
326 TestStartStop/group/no-preload/serial/DeployApp 9.54
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
331 TestStartStop/group/embed-certs/serial/DeployApp 9.44
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.51
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
339 TestStartStop/group/old-k8s-version/serial/SecondStart 795.97
340 TestStartStop/group/no-preload/serial/SecondStart 617.88
342 TestStartStop/group/embed-certs/serial/SecondStart 636.65
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 624.98
354 TestStartStop/group/newest-cni/serial/FirstStart 60.78
355 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.85
357 TestStartStop/group/newest-cni/serial/Stop 11.12
358 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
359 TestStartStop/group/newest-cni/serial/SecondStart 52.02
360 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
363 TestStartStop/group/newest-cni/serial/Pause 2.67
x
+
TestDownloadOnly/v1.16.0/json-events (6.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-936342 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-936342 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.217156972s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-936342
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-936342: exit status 85 (61.79856ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-936342        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:26.442389  210682 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:26.442506  210682 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:26.442514  210682 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:26.442519  210682 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:26.442716  210682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	W0817 21:10:26.442829  210682 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-203458/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-203458/.minikube/config/config.json: no such file or directory
	I0817 21:10:26.443529  210682 out.go:303] Setting JSON to true
	I0817 21:10:26.444385  210682 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21152,"bootTime":1692285475,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:26.444446  210682 start.go:138] virtualization: kvm guest
	I0817 21:10:26.447313  210682 out.go:97] [download-only-936342] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:26.449011  210682 out.go:169] MINIKUBE_LOCATION=16865
	W0817 21:10:26.447441  210682 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball: no such file or directory
	I0817 21:10:26.447493  210682 notify.go:220] Checking for updates...
	I0817 21:10:26.452806  210682 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:26.454561  210682 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:10:26.456044  210682 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:10:26.458389  210682 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0817 21:10:26.461425  210682 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:10:26.461740  210682 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:10:26.498092  210682 out.go:97] Using the kvm2 driver based on user configuration
	I0817 21:10:26.498130  210682 start.go:298] selected driver: kvm2
	I0817 21:10:26.498138  210682 start.go:902] validating driver "kvm2" against <nil>
	I0817 21:10:26.498494  210682 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:26.498632  210682 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 21:10:26.515605  210682 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 21:10:26.515677  210682 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:10:26.516217  210682 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0817 21:10:26.516386  210682 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0817 21:10:26.516425  210682 cni.go:84] Creating CNI manager for ""
	I0817 21:10:26.516438  210682 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:10:26.516447  210682 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0817 21:10:26.516453  210682 start_flags.go:319] config:
	{Name:download-only-936342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-936342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:26.516681  210682 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:26.519148  210682 out.go:97] Downloading VM boot image ...
	I0817 21:10:26.519234  210682 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	E0817 21:10:26.573585  210682 iso.go:90] Unable to download https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso.sha256 Dst:/home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso.download Pwd: Mode:2 Umask:---------- Detectors:[0x4042198 0x4042198 0x4042198 0x4042198 0x4042198 0x4042198 0x4042198] Decompressors:map[bz2:0xc0004c46c0 gz:0xc0004c46c8 tar:0xc0004c4670 tar.bz2:0xc0004c4680 tar.gz:0xc0004c4690 tar.xz:0xc0004c46a0 tar.zst:0xc0004c46b0 tbz2:0xc0004c4680 tgz:0xc0004c4690 txz:0xc0004c46a0 tzst:0xc0004c46b0 xz:0xc0004c46d0 zip:0xc0004c46e0 zst:0xc0004c46d8] Getters:map[file:0xc001387940 http:0xc000dfe280 https:0xc000dfe2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]
}: invalid checksum: Error downloading checksum file: bad response code: 404
	I0817 21:10:26.573670  210682 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:26.575850  210682 out.go:97] Downloading VM boot image ...
	I0817 21:10:26.575899  210682 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0817 21:10:28.171306  210682 out.go:97] Starting control plane node download-only-936342 in cluster download-only-936342
	I0817 21:10:28.171361  210682 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 21:10:28.196191  210682 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:28.196230  210682 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:28.196408  210682 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 21:10:28.198426  210682 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0817 21:10:28.198447  210682 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:28.234909  210682 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:31.177545  210682 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:31.177638  210682 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-936342"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
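Note: the "Last Start" log above first requests the ISO from storage.googleapis.com, gets a 404 on its .sha256 checksum, and falls back to the GitHub release URL. A hedged sketch for pre-seeding the ISO cache so the fallback round trip is avoided on a local run; the cache layout mirrors the path in the log, and minikube generally reuses an ISO already present there (adjust the path if MINIKUBE_HOME points elsewhere, as it does on the Jenkins agent above):

	# pre-download the boot ISO into minikube's cache (path layout taken from the log)
	ISO=minikube-v1.31.0-amd64.iso
	CACHE="$HOME/.minikube/cache/iso/amd64"
	mkdir -p "$CACHE"
	curl -fL "https://github.com/kubernetes/minikube/releases/download/v1.31.0/$ISO" -o "$CACHE/$ISO"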

                                                
                                    
x
+
TestDownloadOnly/v1.27.4/json-events (4.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-936342 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-936342 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.911816854s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (4.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-936342
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-936342: exit status 85 (62.337723ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-936342        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-936342        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:32.724935  210728 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:32.725054  210728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:32.725062  210728 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:32.725067  210728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:32.725269  210728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	W0817 21:10:32.725388  210728 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-203458/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-203458/.minikube/config/config.json: no such file or directory
	I0817 21:10:32.725844  210728 out.go:303] Setting JSON to true
	I0817 21:10:32.726735  210728 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21158,"bootTime":1692285475,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:32.726817  210728 start.go:138] virtualization: kvm guest
	I0817 21:10:32.729176  210728 out.go:97] [download-only-936342] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:32.730801  210728 out.go:169] MINIKUBE_LOCATION=16865
	I0817 21:10:32.729429  210728 notify.go:220] Checking for updates...
	I0817 21:10:32.734532  210728 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:32.736096  210728 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:10:32.737642  210728 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:10:32.739217  210728 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-936342"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0-rc.1/json-events (5.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-936342 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-936342 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.364068891s)
--- PASS: TestDownloadOnly/v1.28.0-rc.1/json-events (5.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0-rc.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-936342
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-936342: exit status 85 (62.293665ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-936342           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-936342           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-936342 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-936342           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:37.699048  210772 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:37.699165  210772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:37.699174  210772 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:37.699178  210772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:37.699407  210772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	W0817 21:10:37.699533  210772 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-203458/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-203458/.minikube/config/config.json: no such file or directory
	I0817 21:10:37.700004  210772 out.go:303] Setting JSON to true
	I0817 21:10:37.700876  210772 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21163,"bootTime":1692285475,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:37.700943  210772 start.go:138] virtualization: kvm guest
	I0817 21:10:37.703231  210772 out.go:97] [download-only-936342] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:37.705056  210772 out.go:169] MINIKUBE_LOCATION=16865
	I0817 21:10:37.703438  210772 notify.go:220] Checking for updates...
	I0817 21:10:37.707984  210772 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:37.710557  210772 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:10:37.712089  210772 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:10:37.713671  210772 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0817 21:10:37.716475  210772 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:10:37.716879  210772 config.go:182] Loaded profile config "download-only-936342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	W0817 21:10:37.716921  210772 start.go:810] api.Load failed for download-only-936342: filestore "download-only-936342": Docker machine "download-only-936342" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:10:37.717006  210772 driver.go:373] Setting default libvirt URI to qemu:///system
	W0817 21:10:37.717037  210772 start.go:810] api.Load failed for download-only-936342: filestore "download-only-936342": Docker machine "download-only-936342" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:10:37.748626  210772 out.go:97] Using the kvm2 driver based on existing profile
	I0817 21:10:37.748663  210772 start.go:298] selected driver: kvm2
	I0817 21:10:37.748670  210772 start.go:902] validating driver "kvm2" against &{Name:download-only-936342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-936342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:37.749181  210772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:37.749265  210772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16865-203458/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0817 21:10:37.764975  210772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0817 21:10:37.765687  210772 cni.go:84] Creating CNI manager for ""
	I0817 21:10:37.765706  210772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0817 21:10:37.765717  210772 start_flags.go:319] config:
	{Name:download-only-936342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:download-only-936342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:37.765935  210772 iso.go:125] acquiring lock: {Name:mke2a4949d961049e33e8fde0f72cb15b897f706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:10:37.767827  210772 out.go:97] Starting control plane node download-only-936342 in cluster download-only-936342
	I0817 21:10:37.767839  210772 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 21:10:37.822742  210772 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:37.822769  210772 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:37.823055  210772 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 21:10:37.825419  210772 out.go:97] Downloading Kubernetes v1.28.0-rc.1 preload ...
	I0817 21:10:37.825458  210772 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:37.856039  210772 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:bb8ba69c7dfa450cc0765c8991e48fa2 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:40.659208  210772 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:40.659290  210772 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-203458/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:41.529364  210772 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0817 21:10:41.529537  210772 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/download-only-936342/config.json ...
	I0817 21:10:41.529760  210772 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 21:10:41.530004  210772 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16865-203458/.minikube/cache/linux/amd64/v1.28.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-936342"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-936342
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-984914 --alsologtostderr --binary-mirror http://127.0.0.1:40607 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-984914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-984914
--- PASS: TestBinaryMirror (0.56s)
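
For reference, the download-only flow exercised by the TestDownloadOnly cases above can be repeated by hand with the same flags the tests pass; this is a minimal sketch (assuming a locally installed minikube binary in place of the out/minikube-linux-amd64 build under test, and reusing the profile name from this run):

    minikube start -o=json --download-only -p download-only-936342 --force \
      --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=kvm2
    minikube delete --all                      # remove every cached profile, or
    minikube delete -p download-only-936342    # delete just this one

The exit status 85 seen in the LogsDuration cases above appears expected for such a profile: a download-only run never creates a control plane node, so there is nothing to collect logs from.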

                                                
                                    
x
+
TestOffline (65.17s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-263814 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-263814 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.124506335s)
helpers_test.go:175: Cleaning up "offline-crio-263814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-263814
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-263814: (1.046031227s)
--- PASS: TestOffline (65.17s)

                                                
                                    
x
+
TestAddons/Setup (145.29s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-696435 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-696435 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.294079535s)
--- PASS: TestAddons/Setup (145.29s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 26.599577ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9d6j4" [4c077f43-ad63-4dec-a59e-eb68f3db07da] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.034327265s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kk4lq" [5d6aa5a0-242c-4db5-834f-597e7cbe48df] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015473422s
addons_test.go:316: (dbg) Run:  kubectl --context addons-696435 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-696435 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-696435 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.28697938s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 ip
2023/08/17 21:13:25 [DEBUG] GET http://192.168.39.18:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.30s)
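
The registry check above boils down to two probes that can also be run by hand; a rough sketch (assuming the registry addon is enabled on the addons-696435 profile, as in this run):

    kubectl --context addons-696435 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    minikube -p addons-696435 ip     # node IP; the test then issues a GET against <node-ip>:5000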

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-72649" [da453357-8e2f-4f79-820d-12ab6ab1a890] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.048365141s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-696435
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-696435: (7.255136049s)
--- PASS: TestAddons/parallel/InspektorGadget (12.30s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 26.494798ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-bl9hm" [06ae83cd-c41c-4ed2-83d0-3670567bffaf] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.032117127s
addons_test.go:391: (dbg) Run:  kubectl --context addons-696435 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-696435 addons disable metrics-server --alsologtostderr -v=1: (1.153525905s)
--- PASS: TestAddons/parallel/MetricsServer (6.29s)
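
A manual spot-check of the metrics-server addon reduces to the same two commands the test runs (sketch, same profile as above):

    kubectl --context addons-696435 top pods -n kube-system
    minikube -p addons-696435 addons disable metrics-server --alsologtostderr -v=1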

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.55s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 26.468177ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-j68kn" [40c6973c-a1b0-4038-9fa8-bedc28b207f1] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.02778467s
addons_test.go:449: (dbg) Run:  kubectl --context addons-696435 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-696435 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.450806067s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p addons-696435 addons disable helm-tiller --alsologtostderr -v=1: (1.040690589s)
--- PASS: TestAddons/parallel/HelmTiller (13.55s)

                                                
                                    
x
+
TestAddons/parallel/CSI (61.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 11.11407ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-696435 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-696435 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [06c163de-c3fe-41da-af42-18ef24e165f0] Pending
helpers_test.go:344: "task-pv-pod" [06c163de-c3fe-41da-af42-18ef24e165f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [06c163de-c3fe-41da-af42-18ef24e165f0] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.026140361s
addons_test.go:560: (dbg) Run:  kubectl --context addons-696435 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-696435 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-696435 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-696435 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-696435 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-696435 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-696435 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-696435 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-696435 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0aa53f17-bfed-4d4d-9a56-186a10adbb62] Pending
helpers_test.go:344: "task-pv-pod-restore" [0aa53f17-bfed-4d4d-9a56-186a10adbb62] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0aa53f17-bfed-4d4d-9a56-186a10adbb62] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.027764892s
addons_test.go:602: (dbg) Run:  kubectl --context addons-696435 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-696435 delete pod task-pv-pod-restore: (1.387542668s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-696435 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-696435 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-696435 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.908308345s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-696435 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.28s)
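
The long runs of identical "get pvc" calls above are the helper polling the claim's phase until it reports Bound; the same check, and the matching snapshot-readiness check, can be done by hand (sketch, using the object names from this test):

    kubectl --context addons-696435 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-696435 get volumesnapshot new-snapshot-demo \
      -o jsonpath={.status.readyToUse} -n default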

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-696435 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-696435 --alsologtostderr -v=1: (2.373800332s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-95cck" [f114a63f-e310-41e3-a494-52ccb195bac0] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-95cck" [f114a63f-e310-41e3-a494-52ccb195bac0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-95cck" [f114a63f-e310-41e3-a494-52ccb195bac0] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.032756895s
--- PASS: TestAddons/parallel/Headlamp (15.41s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-jkshl" [b08497e1-8f16-4539-981f-3f308d7f7695] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.030931669s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-696435
--- PASS: TestAddons/parallel/CloudSpanner (5.98s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-696435 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-696435 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestCertOptions (110.34s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-727392 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-727392 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m48.49478947s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-727392 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-727392 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-727392 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-727392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-727392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-727392: (1.267382609s)
--- PASS: TestCertOptions (110.34s)

                                                
                                    
x
+
TestCertExpiration (300.34s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-185291 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-185291 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m33.452387201s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-185291 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-185291 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (25.731352611s)
helpers_test.go:175: Cleaning up "cert-expiration-185291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-185291
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-185291: (1.157266175s)
--- PASS: TestCertExpiration (300.34s)
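
The certificate-expiration flow above can be reproduced with any profile; a sketch using a hypothetical profile name my-cluster (the 3m and 8760h values are the ones the test passes):

    minikube start -p my-cluster --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # after the 3m window has presumably elapsed, a second start with a longer expiry refreshes the certificates
    minikube start -p my-cluster --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    minikube delete -p my-cluster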

                                                
                                    
x
+
TestForceSystemdFlag (122.3s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-435797 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0817 22:03:34.713631  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-435797 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m1.103605336s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-435797 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-435797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-435797
--- PASS: TestForceSystemdFlag (122.30s)

                                                
                                    
x
+
TestForceSystemdEnv (80.09s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-330284 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-330284 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.08238542s)
helpers_test.go:175: Cleaning up "force-systemd-env-330284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-330284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-330284: (1.003653602s)
--- PASS: TestForceSystemdEnv (80.09s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.58s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0817 22:05:31.665276  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (1.58s)

                                                
                                    
x
+
TestErrorSpam/setup (47.52s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-660083 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-660083 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-660083 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-660083 --driver=kvm2  --container-runtime=crio: (47.5225271s)
--- PASS: TestErrorSpam/setup (47.52s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
x
+
TestErrorSpam/stop (2.23s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 stop: (2.083314546s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660083 --log_dir /tmp/nospam-660083 stop
--- PASS: TestErrorSpam/stop (2.23s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16865-203458/.minikube/files/etc/test/nested/copy/210670/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (62.54s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-540012 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-540012 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m2.540670347s)
--- PASS: TestFunctional/serial/StartWithProxy (62.54s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-540012 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-540012 --alsologtostderr -v=8: (33.432539387s)
functional_test.go:659: soft start took 33.433271432s for "functional-540012" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.43s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-540012 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 cache add registry.k8s.io/pause:3.1: (1.042121891s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 cache add registry.k8s.io/pause:3.3: (1.109686928s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 cache add registry.k8s.io/pause:latest: (1.336707925s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-540012 /tmp/TestFunctionalserialCacheCmdcacheadd_local2277582726/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cache add minikube-local-cache-test:functional-540012
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cache delete minikube-local-cache-test:functional-540012
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-540012
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.231946ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
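
Note: the reload flow exercised above can be reproduced by hand against a running profile. A minimal sketch, assuming the pause images were added with "cache add" as in the earlier tests (profile name taken from this run):
	out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove the image from the node
	out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image no longer present
	out/minikube-linux-amd64 -p functional-540012 cache reload                                            # push the host-side cache back onto the node
	out/minikube-linux-amd64 -p functional-540012 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again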

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 kubectl -- --context functional-540012 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-540012 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-540012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-540012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.173537322s)
functional_test.go:757: restart took 35.173694456s for "functional-540012" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.17s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-540012 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 logs: (1.437176681s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/InvalidService (5.42s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-540012 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-540012
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-540012: exit status 115 (306.945838ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.42:31472 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-540012 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-540012 delete -f testdata/invalidsvc.yaml: (1.804981641s)
--- PASS: TestFunctional/serial/InvalidService (5.42s)
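
Note: exit status 115 with SVC_UNREACHABLE is the expected failure here; the Service object exists, but no running pod backs it, so "minikube service" refuses to open a URL. A sketch of the same check:
	kubectl --context functional-540012 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-540012      # exit 115: no running pod for service invalid-svc found
	kubectl --context functional-540012 delete -f testdata/invalidsvc.yaml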

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 config get cpus: exit status 14 (48.728309ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 config get cpus: exit status 14 (47.698977ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
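
Note: exit status 14 is what "config get" returned for an unset key in this run ("specified key could not be found in config"). A sketch of the set/get/unset round trip the test performs:
	out/minikube-linux-amd64 -p functional-540012 config get cpus      # exit 14: key not set
	out/minikube-linux-amd64 -p functional-540012 config set cpus 2
	out/minikube-linux-amd64 -p functional-540012 config get cpus      # now succeeds and prints the value
	out/minikube-linux-amd64 -p functional-540012 config unset cpus
	out/minikube-linux-amd64 -p functional-540012 config get cpus      # exit 14 again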

                                                
                                    
TestFunctional/parallel/DashboardCmd (59.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-540012 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-540012 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 218114: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (59.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-540012 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-540012 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (161.64447ms)

                                                
                                                
-- stdout --
	* [functional-540012] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 21:22:21.883462  217554 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:22:21.883600  217554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:22:21.883609  217554 out.go:309] Setting ErrFile to fd 2...
	I0817 21:22:21.883613  217554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:22:21.883894  217554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:22:21.884602  217554 out.go:303] Setting JSON to false
	I0817 21:22:21.885629  217554 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21867,"bootTime":1692285475,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:22:21.885699  217554 start.go:138] virtualization: kvm guest
	I0817 21:22:21.892206  217554 out.go:177] * [functional-540012] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:22:21.894009  217554 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:22:21.893968  217554 notify.go:220] Checking for updates...
	I0817 21:22:21.897006  217554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:22:21.898602  217554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:22:21.900113  217554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:22:21.901667  217554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:22:21.903133  217554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:22:21.905330  217554 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:22:21.905828  217554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:22:21.905891  217554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:22:21.924625  217554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41607
	I0817 21:22:21.925143  217554 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:22:21.925875  217554 main.go:141] libmachine: Using API Version  1
	I0817 21:22:21.925906  217554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:22:21.926349  217554 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:22:21.926544  217554 main.go:141] libmachine: (functional-540012) Calling .DriverName
	I0817 21:22:21.926851  217554 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:22:21.927272  217554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:22:21.927347  217554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:22:21.944871  217554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I0817 21:22:21.945392  217554 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:22:21.945956  217554 main.go:141] libmachine: Using API Version  1
	I0817 21:22:21.945986  217554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:22:21.946443  217554 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:22:21.946645  217554 main.go:141] libmachine: (functional-540012) Calling .DriverName
	I0817 21:22:21.982544  217554 out.go:177] * Using the kvm2 driver based on existing profile
	I0817 21:22:21.984061  217554 start.go:298] selected driver: kvm2
	I0817 21:22:21.984076  217554 start.go:902] validating driver "kvm2" against &{Name:functional-540012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:f
unctional-540012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.42 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:22:21.984203  217554 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:22:21.986494  217554 out.go:177] 
	W0817 21:22:21.987933  217554 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0817 21:22:21.989430  217554 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-540012 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
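
Note: the non-zero exit above is the point of the test; with --dry-run no VM is touched, but the requested 250MB is still validated and rejected as below the 1800MB minimum. A sketch of the two probes:
	out/minikube-linux-amd64 start -p functional-540012 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	# exit 23, RSRC_INSUFFICIENT_REQ_MEMORY: requested 250MiB is less than the usable minimum of 1800MB
	out/minikube-linux-amd64 start -p functional-540012 --dry-run --driver=kvm2 --container-runtime=crio
	# exit 0: the existing profile validates cleanly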

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-540012 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-540012 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.161889ms)

                                                
                                                
-- stdout --
	* [functional-540012] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 21:22:21.725986  217510 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:22:21.726189  217510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:22:21.726201  217510 out.go:309] Setting ErrFile to fd 2...
	I0817 21:22:21.726208  217510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:22:21.726556  217510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:22:21.727179  217510 out.go:303] Setting JSON to false
	I0817 21:22:21.728243  217510 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21867,"bootTime":1692285475,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:22:21.728318  217510 start.go:138] virtualization: kvm guest
	I0817 21:22:21.731000  217510 out.go:177] * [functional-540012] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0817 21:22:21.734067  217510 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:22:21.734103  217510 notify.go:220] Checking for updates...
	I0817 21:22:21.736048  217510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:22:21.737586  217510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 21:22:21.739213  217510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 21:22:21.740885  217510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:22:21.742438  217510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:22:21.744345  217510 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:22:21.744709  217510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:22:21.744767  217510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:22:21.760601  217510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0817 21:22:21.761104  217510 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:22:21.761711  217510 main.go:141] libmachine: Using API Version  1
	I0817 21:22:21.761741  217510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:22:21.762275  217510 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:22:21.762643  217510 main.go:141] libmachine: (functional-540012) Calling .DriverName
	I0817 21:22:21.762974  217510 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:22:21.763358  217510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:22:21.763410  217510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:22:21.778814  217510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I0817 21:22:21.779252  217510 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:22:21.779730  217510 main.go:141] libmachine: Using API Version  1
	I0817 21:22:21.779751  217510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:22:21.780066  217510 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:22:21.780305  217510 main.go:141] libmachine: (functional-540012) Calling .DriverName
	I0817 21:22:21.819858  217510 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0817 21:22:21.821371  217510 start.go:298] selected driver: kvm2
	I0817 21:22:21.821385  217510 start.go:902] validating driver "kvm2" against &{Name:functional-540012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:f
unctional-540012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.42 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:22:21.821497  217510 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:22:21.823920  217510 out.go:177] 
	W0817 21:22:21.825542  217510 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0817 21:22:21.827402  217510 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-540012 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-540012 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-dch9n" [be721b3a-37c3-42ea-b0a1-7cf3963ef182] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-dch9n" [be721b3a-37c3-42ea-b0a1-7cf3963ef182] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.058624572s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.42:32049
functional_test.go:1674: http://192.168.50.42:32049: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6fb669fc84-dch9n

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.42:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.42:32049
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.76s)
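
Note: the URL returned by "service ... --url" is a NodePort endpoint on the VM's IP; the test issues the HTTP GET from Go, so the curl line below is added here purely for illustration. A sketch:
	kubectl --context functional-540012 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-540012 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-540012 service hello-node-connect --url   # printed http://192.168.50.42:32049 in this run
	curl -s http://192.168.50.42:32049/                                              # echoserver answers with hostname and request details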

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (52.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1538aac2-17f8-4e9c-b16a-0802488eafc4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025222747s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-540012 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-540012 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-540012 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-540012 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-540012 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d6d99085-ae4e-42d8-af9e-0b11447e3df4] Pending
helpers_test.go:344: "sp-pod" [d6d99085-ae4e-42d8-af9e-0b11447e3df4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d6d99085-ae4e-42d8-af9e-0b11447e3df4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.018551015s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-540012 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-540012 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-540012 delete -f testdata/storage-provisioner/pod.yaml: (1.594104792s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-540012 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d0c8c0b-ec6d-4100-80c0-fdc13719b329] Pending
helpers_test.go:344: "sp-pod" [0d0c8c0b-ec6d-4100-80c0-fdc13719b329] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0d0c8c0b-ec6d-4100-80c0-fdc13719b329] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.027684201s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-540012 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.46s)
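
Note: the pod re-creation above is the point of the test; data written to the PVC-backed mount has to survive the pod. A sketch of the same check with the testdata manifests used by this run:
	kubectl --context functional-540012 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-540012 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-540012 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-540012 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-540012 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context functional-540012 exec sp-pod -- ls /tmp/mount                     # foo is still there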

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh -n functional-540012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 cp functional-540012:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd626153401/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh -n functional-540012 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.05s)

                                                
                                    
TestFunctional/parallel/MySQL (30.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-540012 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-2gc96" [9b113f81-6829-42f9-9bb9-0216145194f4] Pending
helpers_test.go:344: "mysql-7db894d786-2gc96" [9b113f81-6829-42f9-9bb9-0216145194f4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-2gc96" [9b113f81-6829-42f9-9bb9-0216145194f4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.082013693s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;": exit status 1 (608.248347ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;": exit status 1 (397.936858ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;": exit status 1 (243.569094ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-540012 exec mysql-7db894d786-2gc96 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.18s)
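
Note: the access-denied and missing-socket errors above are transient; mysqld inside the pod was still initializing, so the test simply retries the query until it succeeds. A sketch of the same probe (the pod name is a placeholder):
	kubectl --context functional-540012 replace --force -f testdata/mysql.yaml
	kubectl --context functional-540012 get pods -l app=mysql                                        # wait until the pod is Running
	kubectl --context functional-540012 exec <mysql-pod> -- mysql -ppassword -e "show databases;"    # retry until mysqld accepts the connection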

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/210670/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /etc/test/nested/copy/210670/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/210670.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /etc/ssl/certs/210670.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/210670.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /usr/share/ca-certificates/210670.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2106702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /etc/ssl/certs/2106702.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2106702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /usr/share/ca-certificates/2106702.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-540012 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active docker": exit status 1 (246.006573ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active containerd": exit status 1 (253.354266ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
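
Note: with crio as the container runtime, docker and containerd are expected to be inactive in the VM; "systemctl is-active" prints "inactive" and exits 3, which surfaces as the ssh exit status above. A sketch (the crio line is added for contrast and is not part of the test):
	out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active crio"         # active, exit 0
	out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-amd64 -p functional-540012 ssh "sudo systemctl is-active containerd"   # inactive, exit 3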

                                                
                                    
TestFunctional/parallel/License (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-540012 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-540012 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-999r6" [eef701f9-2db5-406a-aad3-68bde243599f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-999r6" [eef701f9-2db5-406a-aad3-68bde243599f] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.021749711s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "287.122374ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "47.147836ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "271.11534ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "46.585788ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdany-port1000834818/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692307328914975622" to /tmp/TestFunctionalparallelMountCmdany-port1000834818/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692307328914975622" to /tmp/TestFunctionalparallelMountCmdany-port1000834818/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692307328914975622" to /tmp/TestFunctionalparallelMountCmdany-port1000834818/001/test-1692307328914975622
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.651419ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 17 21:22 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 17 21:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 17 21:22 test-1692307328914975622
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh cat /mount-9p/test-1692307328914975622
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-540012 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [658daadf-9b37-49f9-be61-d7f0099ee76d] Pending
helpers_test.go:344: "busybox-mount" [658daadf-9b37-49f9-be61-d7f0099ee76d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [658daadf-9b37-49f9-be61-d7f0099ee76d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [658daadf-9b37-49f9-be61-d7f0099ee76d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.040456343s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-540012 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdany-port1000834818/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.96s)
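For readers reproducing the any-port check outside the test harness, here is a minimal Go sketch of the same probe the test performs above: it retries "findmnt -T /mount-9p | grep 9p" over minikube ssh until the 9p mount appears, which is why the first non-zero exit in the log is expected. The binary path out/minikube-linux-amd64, the profile functional-540012 and the /mount-9p target are taken from this run; the 30-second deadline is an assumption for illustration only.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor9pMount repeats the check used by the test: run `findmnt -T <target> | grep 9p`
// inside the guest via `minikube ssh` until it exits 0 or the deadline passes.
func waitFor9pMount(profile, target string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", target))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("%s never showed a 9p mount within %s", target, deadline)
		}
		// The first probe often races the mount daemon, as the exit status 1 above shows.
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitFor9pMount("functional-540012", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}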

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdspecific-port2224806245/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.109911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdspecific-port2224806245/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdspecific-port2224806245/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)
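The specific-port variant pins the 9p server to a fixed port with --port 46464, as the daemon command above shows. A rough Go sketch of driving that flow by hand follows; the source directory /tmp/demo-src is a hypothetical placeholder, the port and profile come from this run, and the teardown via Process.Kill is an illustration rather than the exact mechanism the test helper uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Launch the 9p mount daemon in the background on a fixed port (46464, matching
	// the --port flag used by this test), probe the mount, then tear the daemon down.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-540012",
		"/tmp/demo-src:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		fmt.Println("could not start mount daemon:", err)
		return
	}
	defer mount.Process.Kill() // stop the background daemon once the probe is done

	time.Sleep(2 * time.Second) // give the daemon a moment, as the retry above suggests

	probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-540012",
		"ssh", "findmnt -T /mount-9p | grep 9p")
	out, err := probe.CombinedOutput()
	fmt.Printf("probe err=%v\n%s", err, out)
}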

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 service list -o json
functional_test.go:1493: Took "335.044903ms" to run "out/minikube-linux-amd64 -p functional-540012 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.42:32622
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)
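The HTTPS subtest discovers the NodePort endpoint (https://192.168.50.42:32622 in this run) by reading the stdout of the service --url command logged above. A small Go sketch of that discovery step, assuming the same binary path and profile as this run and that the endpoint is printed on its own line of stdout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the test; the endpoint URL is read from stdout.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-540012",
		"service", "--namespace=default", "--https", "--url", "hello-node").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.HasPrefix(line, "https://") {
			fmt.Println("found endpoint:", line)
		}
	}
}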

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1365358229/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1365358229/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1365358229/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T" /mount1: exit status 1 (319.342741ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-540012 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1365358229/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1365358229/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-540012 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1365358229/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)
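VerifyCleanup ends by running mount --kill=true and stopping the three background daemons. A short Go sketch along the same lines follows; the post-kill findmnt loop is an extra illustration of what "cleaned up" means here, not a step the test itself logs, and the binary path and profile are taken from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-540012" // profile name taken from this run

	// Kill any mount daemons for the profile, as the test does with --kill=true.
	kill := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile, "--kill=true")
	if out, err := kill.CombinedOutput(); err != nil {
		fmt.Printf("mount --kill failed: %v\n%s", err, out)
	}

	// After cleanup, findmnt should no longer report any of the three targets.
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		check := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "findmnt -T "+target)
		if err := check.Run(); err == nil {
			fmt.Println("still mounted:", target)
		} else {
			fmt.Println("cleaned up:", target)
		}
	}
}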

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.42:32622
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-540012 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-540012
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-540012
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-540012 image ls --format short --alsologtostderr:
I0817 21:22:57.267423  218807 out.go:296] Setting OutFile to fd 1 ...
I0817 21:22:57.267543  218807 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.267558  218807 out.go:309] Setting ErrFile to fd 2...
I0817 21:22:57.267565  218807 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.267797  218807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
I0817 21:22:57.268372  218807 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.268467  218807 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.268815  218807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.268884  218807 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.283399  218807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
I0817 21:22:57.283794  218807 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.284537  218807 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.284559  218807 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.284970  218807 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.285196  218807 main.go:141] libmachine: (functional-540012) Calling .GetState
I0817 21:22:57.287241  218807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.287304  218807 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.302107  218807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
I0817 21:22:57.302657  218807 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.303285  218807 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.303325  218807 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.303662  218807 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.303873  218807 main.go:141] libmachine: (functional-540012) Calling .DriverName
I0817 21:22:57.304075  218807 ssh_runner.go:195] Run: systemctl --version
I0817 21:22:57.304101  218807 main.go:141] libmachine: (functional-540012) Calling .GetSSHHostname
I0817 21:22:57.307606  218807 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.308049  218807 main.go:141] libmachine: (functional-540012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a6:65", ip: ""} in network mk-functional-540012: {Iface:virbr1 ExpiryTime:2023-08-17 22:19:57 +0000 UTC Type:0 Mac:52:54:00:b3:a6:65 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-540012 Clientid:01:52:54:00:b3:a6:65}
I0817 21:22:57.308082  218807 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined IP address 192.168.50.42 and MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.308265  218807 main.go:141] libmachine: (functional-540012) Calling .GetSSHPort
I0817 21:22:57.308449  218807 main.go:141] libmachine: (functional-540012) Calling .GetSSHKeyPath
I0817 21:22:57.308650  218807 main.go:141] libmachine: (functional-540012) Calling .GetSSHUsername
I0817 21:22:57.308828  218807 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/functional-540012/id_rsa Username:docker}
I0817 21:22:57.400380  218807 ssh_runner.go:195] Run: sudo crictl images --output json
I0817 21:22:57.446554  218807 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.446576  218807 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.446872  218807 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.446897  218807 main.go:141] libmachine: Making call to close connection to plugin binary
I0817 21:22:57.446914  218807 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.446923  218807 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.447222  218807 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.447247  218807 main.go:141] libmachine: Making call to close connection to plugin binary
I0817 21:22:57.447270  218807 main.go:141] libmachine: (functional-540012) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-540012 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| docker.io/library/nginx                 | latest             | eea7b3dcba7ee | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-540012  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-540012  | ef95fd69bb8ec | 3.34kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-controller-manager | v1.27.4            | f466468864b7a | 114MB  |
| registry.k8s.io/kube-proxy              | v1.27.4            | 6848d7eda0341 | 72.7MB |
| registry.k8s.io/kube-scheduler          | v1.27.4            | 98ef2570f3cde | 59.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-apiserver          | v1.27.4            | e7972205b6614 | 122MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-540012 image ls --format table --alsologtostderr:
I0817 21:22:57.763679  218917 out.go:296] Setting OutFile to fd 1 ...
I0817 21:22:57.763944  218917 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.763956  218917 out.go:309] Setting ErrFile to fd 2...
I0817 21:22:57.763963  218917 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.764296  218917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
I0817 21:22:57.764965  218917 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.765061  218917 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.765396  218917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.765455  218917 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.780821  218917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
I0817 21:22:57.781314  218917 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.781983  218917 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.782012  218917 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.782407  218917 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.782627  218917 main.go:141] libmachine: (functional-540012) Calling .GetState
I0817 21:22:57.784744  218917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.784816  218917 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.800177  218917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
I0817 21:22:57.800621  218917 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.801178  218917 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.801202  218917 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.801517  218917 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.801720  218917 main.go:141] libmachine: (functional-540012) Calling .DriverName
I0817 21:22:57.801946  218917 ssh_runner.go:195] Run: systemctl --version
I0817 21:22:57.801976  218917 main.go:141] libmachine: (functional-540012) Calling .GetSSHHostname
I0817 21:22:57.804817  218917 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.805502  218917 main.go:141] libmachine: (functional-540012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a6:65", ip: ""} in network mk-functional-540012: {Iface:virbr1 ExpiryTime:2023-08-17 22:19:57 +0000 UTC Type:0 Mac:52:54:00:b3:a6:65 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-540012 Clientid:01:52:54:00:b3:a6:65}
I0817 21:22:57.805538  218917 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined IP address 192.168.50.42 and MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.805639  218917 main.go:141] libmachine: (functional-540012) Calling .GetSSHPort
I0817 21:22:57.805838  218917 main.go:141] libmachine: (functional-540012) Calling .GetSSHKeyPath
I0817 21:22:57.805990  218917 main.go:141] libmachine: (functional-540012) Calling .GetSSHUsername
I0817 21:22:57.806134  218917 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/functional-540012/id_rsa Username:docker}
I0817 21:22:57.901872  218917 ssh_runner.go:195] Run: sudo crictl images --output json
I0817 21:22:57.944858  218917 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.944880  218917 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.945205  218917 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.945226  218917 main.go:141] libmachine: Making call to close connection to plugin binary
I0817 21:22:57.945243  218917 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.945244  218917 main.go:141] libmachine: (functional-540012) DBG | Closing plugin on server side
I0817 21:22:57.945253  218917 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.945502  218917 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.945518  218917 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-540012 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e
7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d","registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"122078160"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTa
gs":[],"size":"43824855"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c
9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265","registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"113931062"},{"id":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","repoDigests":["registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af","registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"59814710"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"]
,"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:13d22ec63300e16014d4a42aed735207a8b33c223cff19627dd3042e5a10a3a0","docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820092"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/add
on-resizer:functional-540012"],"size":"34114467"},{"id":"ef95fd69bb8ec90a24c9a6b937e5da58ecb417ad81ce8ea3969337110a4c2506","repoDigests":["localhost/minikube-local-cache-test@sha256:e403c81470f57c8be4e2f147331404c0ecbce871f6ee975ebf7894617bedbedc"],"repoTags":["localhost/minikube-local-cache-test:functional-540012"],"size":"3343"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf","reg
istry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"72714135"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-540012 image ls --format json --alsologtostderr:
I0817 21:22:57.523821  218865 out.go:296] Setting OutFile to fd 1 ...
I0817 21:22:57.523934  218865 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.523942  218865 out.go:309] Setting ErrFile to fd 2...
I0817 21:22:57.523946  218865 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.524204  218865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
I0817 21:22:57.524816  218865 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.524930  218865 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.525435  218865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.525493  218865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.540796  218865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
I0817 21:22:57.541328  218865 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.541902  218865 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.541925  218865 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.542361  218865 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.542580  218865 main.go:141] libmachine: (functional-540012) Calling .GetState
I0817 21:22:57.544533  218865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.544576  218865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.559171  218865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
I0817 21:22:57.559581  218865 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.560182  218865 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.560221  218865 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.560587  218865 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.560762  218865 main.go:141] libmachine: (functional-540012) Calling .DriverName
I0817 21:22:57.560964  218865 ssh_runner.go:195] Run: systemctl --version
I0817 21:22:57.560996  218865 main.go:141] libmachine: (functional-540012) Calling .GetSSHHostname
I0817 21:22:57.564087  218865 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.564465  218865 main.go:141] libmachine: (functional-540012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a6:65", ip: ""} in network mk-functional-540012: {Iface:virbr1 ExpiryTime:2023-08-17 22:19:57 +0000 UTC Type:0 Mac:52:54:00:b3:a6:65 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-540012 Clientid:01:52:54:00:b3:a6:65}
I0817 21:22:57.564510  218865 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined IP address 192.168.50.42 and MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.564637  218865 main.go:141] libmachine: (functional-540012) Calling .GetSSHPort
I0817 21:22:57.564853  218865 main.go:141] libmachine: (functional-540012) Calling .GetSSHKeyPath
I0817 21:22:57.565052  218865 main.go:141] libmachine: (functional-540012) Calling .GetSSHUsername
I0817 21:22:57.565230  218865 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/functional-540012/id_rsa Username:docker}
I0817 21:22:57.670409  218865 ssh_runner.go:195] Run: sudo crictl images --output json
I0817 21:22:57.709074  218865 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.709093  218865 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.709449  218865 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.709472  218865 main.go:141] libmachine: Making call to close connection to plugin binary
I0817 21:22:57.709489  218865 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.709498  218865 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.709772  218865 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.709792  218865 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
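The JSON emitted by image ls --format json above is an array of objects with id, repoDigests, repoTags and a string-typed size. A minimal Go sketch that decodes that output, assuming the same binary path and profile as this run (with .Output() only stdout is parsed; the --alsologtostderr logging goes to stderr):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, emitted as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-540012",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}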

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-540012 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ef95fd69bb8ec90a24c9a6b937e5da58ecb417ad81ce8ea3969337110a4c2506
repoDigests:
- localhost/minikube-local-cache-test@sha256:e403c81470f57c8be4e2f147331404c0ecbce871f6ee975ebf7894617bedbedc
repoTags:
- localhost/minikube-local-cache-test:functional-540012
size: "3343"
- id: f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
- registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "113931062"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
- registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "122078160"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
- registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "59814710"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:13d22ec63300e16014d4a42aed735207a8b33c223cff19627dd3042e5a10a3a0
- docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35
repoTags:
- docker.io/library/nginx:latest
size: "190820092"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-540012
size: "34114467"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
- registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "72714135"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-540012 image ls --format yaml --alsologtostderr:
I0817 21:22:57.265831  218808 out.go:296] Setting OutFile to fd 1 ...
I0817 21:22:57.265946  218808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.265955  218808 out.go:309] Setting ErrFile to fd 2...
I0817 21:22:57.265959  218808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.266202  218808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
I0817 21:22:57.266791  218808 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.266889  218808 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.267256  218808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.267316  218808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.282502  218808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
I0817 21:22:57.283015  218808 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.283703  218808 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.283732  218808 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.284124  218808 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.284343  218808 main.go:141] libmachine: (functional-540012) Calling .GetState
I0817 21:22:57.286847  218808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.286908  218808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.302148  218808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
I0817 21:22:57.302631  218808 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.303225  218808 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.303252  218808 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.303589  218808 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.303773  218808 main.go:141] libmachine: (functional-540012) Calling .DriverName
I0817 21:22:57.303964  218808 ssh_runner.go:195] Run: systemctl --version
I0817 21:22:57.303998  218808 main.go:141] libmachine: (functional-540012) Calling .GetSSHHostname
I0817 21:22:57.307341  218808 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.307766  218808 main.go:141] libmachine: (functional-540012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a6:65", ip: ""} in network mk-functional-540012: {Iface:virbr1 ExpiryTime:2023-08-17 22:19:57 +0000 UTC Type:0 Mac:52:54:00:b3:a6:65 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-540012 Clientid:01:52:54:00:b3:a6:65}
I0817 21:22:57.307807  218808 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined IP address 192.168.50.42 and MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.308020  218808 main.go:141] libmachine: (functional-540012) Calling .GetSSHPort
I0817 21:22:57.308264  218808 main.go:141] libmachine: (functional-540012) Calling .GetSSHKeyPath
I0817 21:22:57.308448  218808 main.go:141] libmachine: (functional-540012) Calling .GetSSHUsername
I0817 21:22:57.308592  218808 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/functional-540012/id_rsa Username:docker}
I0817 21:22:57.410503  218808 ssh_runner.go:195] Run: sudo crictl images --output json
I0817 21:22:57.471229  218808 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.471243  218808 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.471547  218808 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.471571  218808 main.go:141] libmachine: Making call to close connection to plugin binary
I0817 21:22:57.471590  218808 main.go:141] libmachine: Making call to close driver server
I0817 21:22:57.471602  218808 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:22:57.471844  218808 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:22:57.471857  218808 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-540012 ssh pgrep buildkitd: exit status 1 (221.452231ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image build -t localhost/my-image:functional-540012 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image build -t localhost/my-image:functional-540012 testdata/build --alsologtostderr: (2.356347098s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-540012 image build -t localhost/my-image:functional-540012 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 463b6c4d0e3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-540012
--> fccae037ad7
Successfully tagged localhost/my-image:functional-540012
fccae037ad7d846fa90b3aaaa54e24768000ad2f99586f17524857c8ac06c128
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-540012 image build -t localhost/my-image:functional-540012 testdata/build --alsologtostderr:
I0817 21:22:57.723581  218906 out.go:296] Setting OutFile to fd 1 ...
I0817 21:22:57.723726  218906 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.723739  218906 out.go:309] Setting ErrFile to fd 2...
I0817 21:22:57.723743  218906 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:22:57.724049  218906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
I0817 21:22:57.724887  218906 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.725483  218906 config.go:182] Loaded profile config "functional-540012": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:22:57.725898  218906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.725958  218906 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.741611  218906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
I0817 21:22:57.742109  218906 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.742773  218906 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.742801  218906 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.743187  218906 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.743418  218906 main.go:141] libmachine: (functional-540012) Calling .GetState
I0817 21:22:57.745450  218906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0817 21:22:57.745519  218906 main.go:141] libmachine: Launching plugin server for driver kvm2
I0817 21:22:57.762076  218906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
I0817 21:22:57.762763  218906 main.go:141] libmachine: () Calling .GetVersion
I0817 21:22:57.763488  218906 main.go:141] libmachine: Using API Version  1
I0817 21:22:57.763512  218906 main.go:141] libmachine: () Calling .SetConfigRaw
I0817 21:22:57.763877  218906 main.go:141] libmachine: () Calling .GetMachineName
I0817 21:22:57.764057  218906 main.go:141] libmachine: (functional-540012) Calling .DriverName
I0817 21:22:57.764241  218906 ssh_runner.go:195] Run: systemctl --version
I0817 21:22:57.764269  218906 main.go:141] libmachine: (functional-540012) Calling .GetSSHHostname
I0817 21:22:57.767547  218906 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.768066  218906 main.go:141] libmachine: (functional-540012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a6:65", ip: ""} in network mk-functional-540012: {Iface:virbr1 ExpiryTime:2023-08-17 22:19:57 +0000 UTC Type:0 Mac:52:54:00:b3:a6:65 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-540012 Clientid:01:52:54:00:b3:a6:65}
I0817 21:22:57.768099  218906 main.go:141] libmachine: (functional-540012) DBG | domain functional-540012 has defined IP address 192.168.50.42 and MAC address 52:54:00:b3:a6:65 in network mk-functional-540012
I0817 21:22:57.768246  218906 main.go:141] libmachine: (functional-540012) Calling .GetSSHPort
I0817 21:22:57.768471  218906 main.go:141] libmachine: (functional-540012) Calling .GetSSHKeyPath
I0817 21:22:57.768666  218906 main.go:141] libmachine: (functional-540012) Calling .GetSSHUsername
I0817 21:22:57.768872  218906 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/functional-540012/id_rsa Username:docker}
I0817 21:22:57.860866  218906 build_images.go:151] Building image from path: /tmp/build.2539173922.tar
I0817 21:22:57.860954  218906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0817 21:22:57.872126  218906 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2539173922.tar
I0817 21:22:57.877541  218906 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2539173922.tar: stat -c "%s %y" /var/lib/minikube/build/build.2539173922.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2539173922.tar': No such file or directory
I0817 21:22:57.877589  218906 ssh_runner.go:362] scp /tmp/build.2539173922.tar --> /var/lib/minikube/build/build.2539173922.tar (3072 bytes)
I0817 21:22:57.909858  218906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2539173922
I0817 21:22:57.923303  218906 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2539173922 -xf /var/lib/minikube/build/build.2539173922.tar
I0817 21:22:57.934964  218906 crio.go:297] Building image: /var/lib/minikube/build/build.2539173922
I0817 21:22:57.935050  218906 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-540012 /var/lib/minikube/build/build.2539173922 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0817 21:22:59.997476  218906 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-540012 /var/lib/minikube/build/build.2539173922 --cgroup-manager=cgroupfs: (2.062397869s)
I0817 21:22:59.997551  218906 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2539173922
I0817 21:23:00.012574  218906 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2539173922.tar
I0817 21:23:00.023975  218906 build_images.go:207] Built localhost/my-image:functional-540012 from /tmp/build.2539173922.tar
I0817 21:23:00.024029  218906 build_images.go:123] succeeded building to: functional-540012
I0817 21:23:00.024036  218906 build_images.go:124] failed building to: 
I0817 21:23:00.024103  218906 main.go:141] libmachine: Making call to close driver server
I0817 21:23:00.024118  218906 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:23:00.024488  218906 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:23:00.024512  218906 main.go:141] libmachine: Making call to close connection to plugin binary
I0817 21:23:00.024585  218906 main.go:141] libmachine: (functional-540012) DBG | Closing plugin on server side
I0817 21:23:00.024610  218906 main.go:141] libmachine: Making call to close driver server
I0817 21:23:00.024627  218906 main.go:141] libmachine: (functional-540012) Calling .Close
I0817 21:23:00.024932  218906 main.go:141] libmachine: (functional-540012) DBG | Closing plugin on server side
I0817 21:23:00.024997  218906 main.go:141] libmachine: Successfully made call to close driver server
I0817 21:23:00.025026  218906 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls
E0817 21:23:09.344931  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.351070  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.361418  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.381787  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.422169  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.502580  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.663055  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:09.983677  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:10.624719  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:11.905246  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:14.467067  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:19.587640  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
2023/08/17 21:23:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)
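
Note: the ImageBuild log above traces how the test builds from the uploaded tar: the tarball is copied to /var/lib/minikube/build, extracted, and built with sudo podman build ... --cgroup-manager=cgroupfs. As a hedged illustration only (not minikube's own code), the Go sketch below shells out to the same podman invocation; the image tag and context directory are placeholder values, not the test's actual ones.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same podman flags as seen in the log above; the tag and context
	// directory here are hypothetical placeholders.
	cmd := exec.Command("sudo", "podman", "build",
		"-t", "localhost/my-image:example",
		"--cgroup-manager=cgroupfs",
		"/var/lib/minikube/build/example")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}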

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-540012
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image load --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image load --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr: (8.742218479s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image load --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image load --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr: (6.448445778s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-540012
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image load --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image load --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr: (5.491413339s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image save gcr.io/google-containers/addon-resizer:functional-540012 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image save gcr.io/google-containers/addon-resizer:functional-540012 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.010966976s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image rm gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.693710873s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-540012
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-540012 image save --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-540012 image save --daemon gcr.io/google-containers/addon-resizer:functional-540012 --alsologtostderr: (1.221924817s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-540012
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.26s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-540012
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-540012
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-540012
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (114.75s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-449686 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0817 21:23:29.828068  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:23:50.308380  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:24:31.268677  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-449686 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m54.750024581s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (114.75s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.62s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons enable ingress --alsologtostderr -v=5: (13.621232736s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.62s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-449686 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
x
+
TestJSONOutput/start/Command (62.45s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-716034 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0817 21:28:29.475090  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:28:37.030118  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-716034 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.447660627s)
--- PASS: TestJSONOutput/start/Command (62.45s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-716034 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-716034 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-716034 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-716034 --output=json --user=testUser: (7.094082776s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-100000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-100000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.962746ms)

-- stdout --
	{"specversion":"1.0","id":"89286e25-696f-456f-a45c-2bb5d389ff80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-100000] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f02391e3-a205-43e2-a720-426661dc18cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16865"}}
	{"specversion":"1.0","id":"be94fe71-20eb-4438-b480-c59d62ef2516","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6690ca56-d09b-4f8b-b7f3-f080573a6db4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig"}}
	{"specversion":"1.0","id":"bd4383ed-2749-4af6-be85-b195302fc443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube"}}
	{"specversion":"1.0","id":"cea0ae40-0a62-4a5e-a04d-df39e2d9e05a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4f5139ad-b0b9-4dc9-9554-f4796a66c4b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4a1d03e0-d594-48eb-bcc8-86b21e19fb63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-100000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-100000
--- PASS: TestErrorJSONOutput (0.20s)
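
Note: the -- stdout -- block above shows the line-delimited, CloudEvents-style JSON that minikube emits with --output=json (specversion, id, source, type, data). As a rough, hedged sketch only (the struct below is inferred from that output, not an official schema), this Go program reads such lines from stdin and prints each event's type and message.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the stdout block above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON lines
		}
		// Error events (type io.k8s.sigs.minikube.error) also carry an
		// exitcode field, as in the DRV_UNSUPPORTED_OS line above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Usage, for example: out/minikube-linux-amd64 start -p some-profile --output=json | go run parse_events.go (parse_events.go is a hypothetical file name).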

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (101.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-074584 --driver=kvm2  --container-runtime=crio
E0817 21:29:51.396635  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-074584 --driver=kvm2  --container-runtime=crio: (48.601779516s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-078293 --driver=kvm2  --container-runtime=crio
E0817 21:30:31.666303  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:31.671659  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:31.681966  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:31.702315  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:31.742705  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:31.823130  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:31.983607  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:32.304269  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:32.945373  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:34.226380  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:36.787307  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:41.908156  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:30:52.148474  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-078293 --driver=kvm2  --container-runtime=crio: (49.971832207s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-074584
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-078293
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-078293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-078293
E0817 21:31:12.628828  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "first-074584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-074584
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-074584: (1.000623786s)
--- PASS: TestMinikubeProfile (101.37s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (29.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-022661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-022661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.668003026s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-022661 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-022661 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-043589 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0817 21:31:53.589057  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:32:07.553565  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-043589 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.880069028s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-043589 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-043589 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-022661 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-043589 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-043589 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-043589
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-043589: (1.230662912s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.12s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-043589
E0817 21:32:35.239288  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-043589: (22.115027451s)
--- PASS: TestMountStart/serial/RestartStopped (23.12s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-043589 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-043589 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (114.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959371 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0817 21:33:09.344262  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:33:15.509508  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959371 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.995298118s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-959371 -- rollout status deployment/busybox: (3.025603326s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-65x2b -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-9c77m -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-65x2b -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-9c77m -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-65x2b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959371 -- exec busybox-67b7f59bb-9c77m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.84s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (40.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-959371 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-959371 -v 3 --alsologtostderr: (39.902588879s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.50s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp testdata/cp-test.txt multinode-959371:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1606715842/001/cp-test_multinode-959371.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371:/home/docker/cp-test.txt multinode-959371-m02:/home/docker/cp-test_multinode-959371_multinode-959371-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m02 "sudo cat /home/docker/cp-test_multinode-959371_multinode-959371-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371:/home/docker/cp-test.txt multinode-959371-m03:/home/docker/cp-test_multinode-959371_multinode-959371-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m03 "sudo cat /home/docker/cp-test_multinode-959371_multinode-959371-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp testdata/cp-test.txt multinode-959371-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1606715842/001/cp-test_multinode-959371-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371-m02:/home/docker/cp-test.txt multinode-959371:/home/docker/cp-test_multinode-959371-m02_multinode-959371.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371 "sudo cat /home/docker/cp-test_multinode-959371-m02_multinode-959371.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371-m02:/home/docker/cp-test.txt multinode-959371-m03:/home/docker/cp-test_multinode-959371-m02_multinode-959371-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m03 "sudo cat /home/docker/cp-test_multinode-959371-m02_multinode-959371-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp testdata/cp-test.txt multinode-959371-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1606715842/001/cp-test_multinode-959371-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt multinode-959371:/home/docker/cp-test_multinode-959371-m03_multinode-959371.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371 "sudo cat /home/docker/cp-test_multinode-959371-m03_multinode-959371.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 cp multinode-959371-m03:/home/docker/cp-test.txt multinode-959371-m02:/home/docker/cp-test_multinode-959371-m03_multinode-959371-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 ssh -n multinode-959371-m02 "sudo cat /home/docker/cp-test_multinode-959371-m03_multinode-959371-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.54s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 node stop m03
E0817 21:35:31.665278  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-959371 node stop m03: (2.08357183s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959371 status: exit status 7 (457.742487ms)

-- stdout --
	multinode-959371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-959371-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-959371-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959371 status --alsologtostderr: exit status 7 (447.602225ms)

-- stdout --
	multinode-959371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-959371-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-959371-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0817 21:35:33.541001  225896 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:35:33.541137  225896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:35:33.541146  225896 out.go:309] Setting ErrFile to fd 2...
	I0817 21:35:33.541150  225896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:35:33.541359  225896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 21:35:33.541534  225896 out.go:303] Setting JSON to false
	I0817 21:35:33.541561  225896 mustload.go:65] Loading cluster: multinode-959371
	I0817 21:35:33.541691  225896 notify.go:220] Checking for updates...
	I0817 21:35:33.541971  225896 config.go:182] Loaded profile config "multinode-959371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:35:33.541985  225896 status.go:255] checking status of multinode-959371 ...
	I0817 21:35:33.542415  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.542483  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.558525  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0817 21:35:33.558967  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.559591  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.559612  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.560058  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.560289  225896 main.go:141] libmachine: (multinode-959371) Calling .GetState
	I0817 21:35:33.562203  225896 status.go:330] multinode-959371 host status = "Running" (err=<nil>)
	I0817 21:35:33.562225  225896 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:35:33.562668  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.562710  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.578177  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0817 21:35:33.578644  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.579155  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.579185  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.579546  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.579736  225896 main.go:141] libmachine: (multinode-959371) Calling .GetIP
	I0817 21:35:33.582798  225896 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:35:33.583145  225896 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:35:33.583179  225896 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:35:33.583326  225896 host.go:66] Checking if "multinode-959371" exists ...
	I0817 21:35:33.583669  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.583717  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.599216  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0817 21:35:33.599647  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.600154  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.600179  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.600497  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.600689  225896 main.go:141] libmachine: (multinode-959371) Calling .DriverName
	I0817 21:35:33.600870  225896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:35:33.600897  225896 main.go:141] libmachine: (multinode-959371) Calling .GetSSHHostname
	I0817 21:35:33.603489  225896 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:35:33.603914  225896 main.go:141] libmachine: (multinode-959371) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:61:ee", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:32:56 +0000 UTC Type:0 Mac:52:54:00:b5:61:ee Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-959371 Clientid:01:52:54:00:b5:61:ee}
	I0817 21:35:33.603948  225896 main.go:141] libmachine: (multinode-959371) DBG | domain multinode-959371 has defined IP address 192.168.39.104 and MAC address 52:54:00:b5:61:ee in network mk-multinode-959371
	I0817 21:35:33.604054  225896 main.go:141] libmachine: (multinode-959371) Calling .GetSSHPort
	I0817 21:35:33.604221  225896 main.go:141] libmachine: (multinode-959371) Calling .GetSSHKeyPath
	I0817 21:35:33.604346  225896 main.go:141] libmachine: (multinode-959371) Calling .GetSSHUsername
	I0817 21:35:33.604513  225896 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371/id_rsa Username:docker}
	I0817 21:35:33.698230  225896 ssh_runner.go:195] Run: systemctl --version
	I0817 21:35:33.704503  225896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:35:33.719577  225896 kubeconfig.go:92] found "multinode-959371" server: "https://192.168.39.104:8443"
	I0817 21:35:33.719607  225896 api_server.go:166] Checking apiserver status ...
	I0817 21:35:33.719641  225896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:35:33.732706  225896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup
	I0817 21:35:33.744292  225896 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod1844dfd193c27ced8aa4dba039096475/crio-541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062"
	I0817 21:35:33.744389  225896 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1844dfd193c27ced8aa4dba039096475/crio-541fc380a4b095aab71356b746e2a6750b034aa1008679ad9632c09e3026f062/freezer.state
	I0817 21:35:33.755096  225896 api_server.go:204] freezer state: "THAWED"
	I0817 21:35:33.755137  225896 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0817 21:35:33.760662  225896 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0817 21:35:33.760700  225896 status.go:421] multinode-959371 apiserver status = Running (err=<nil>)
	I0817 21:35:33.760715  225896 status.go:257] multinode-959371 status: &{Name:multinode-959371 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:35:33.760738  225896 status.go:255] checking status of multinode-959371-m02 ...
	I0817 21:35:33.761166  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.761208  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.777057  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0817 21:35:33.777501  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.778063  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.778095  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.778466  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.778696  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .GetState
	I0817 21:35:33.780211  225896 status.go:330] multinode-959371-m02 host status = "Running" (err=<nil>)
	I0817 21:35:33.780238  225896 host.go:66] Checking if "multinode-959371-m02" exists ...
	I0817 21:35:33.780540  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.780580  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.796401  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34491
	I0817 21:35:33.796911  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.797442  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.797463  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.797840  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.798072  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .GetIP
	I0817 21:35:33.800899  225896 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:35:33.801331  225896 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:35:33.801374  225896 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:35:33.801432  225896 host.go:66] Checking if "multinode-959371-m02" exists ...
	I0817 21:35:33.801849  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.801886  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.818404  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43211
	I0817 21:35:33.818824  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.819298  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.819365  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.819686  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.819886  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .DriverName
	I0817 21:35:33.820091  225896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:35:33.820114  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHHostname
	I0817 21:35:33.822981  225896 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:35:33.823417  225896 main.go:141] libmachine: (multinode-959371-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:00:c7", ip: ""} in network mk-multinode-959371: {Iface:virbr1 ExpiryTime:2023-08-17 22:34:04 +0000 UTC Type:0 Mac:52:54:00:c1:00:c7 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-959371-m02 Clientid:01:52:54:00:c1:00:c7}
	I0817 21:35:33.823485  225896 main.go:141] libmachine: (multinode-959371-m02) DBG | domain multinode-959371-m02 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:00:c7 in network mk-multinode-959371
	I0817 21:35:33.823594  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHPort
	I0817 21:35:33.823780  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHKeyPath
	I0817 21:35:33.823986  225896 main.go:141] libmachine: (multinode-959371-m02) Calling .GetSSHUsername
	I0817 21:35:33.824147  225896 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16865-203458/.minikube/machines/multinode-959371-m02/id_rsa Username:docker}
	I0817 21:35:33.913455  225896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:35:33.926593  225896 status.go:257] multinode-959371-m02 status: &{Name:multinode-959371-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:35:33.926639  225896 status.go:255] checking status of multinode-959371-m03 ...
	I0817 21:35:33.926951  225896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0817 21:35:33.926981  225896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0817 21:35:33.943155  225896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0817 21:35:33.943596  225896 main.go:141] libmachine: () Calling .GetVersion
	I0817 21:35:33.944119  225896 main.go:141] libmachine: Using API Version  1
	I0817 21:35:33.944151  225896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0817 21:35:33.944507  225896 main.go:141] libmachine: () Calling .GetMachineName
	I0817 21:35:33.944742  225896 main.go:141] libmachine: (multinode-959371-m03) Calling .GetState
	I0817 21:35:33.946259  225896 status.go:330] multinode-959371-m03 host status = "Stopped" (err=<nil>)
	I0817 21:35:33.946273  225896 status.go:343] host is not running, skipping remaining checks
	I0817 21:35:33.946279  225896 status.go:257] multinode-959371-m03 status: &{Name:multinode-959371-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.99s)
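For reference, the status log above shows the sequence minikube uses to decide the control plane is healthy: SSH to the node, confirm the kubelet unit is active, find the kube-apiserver process, check that its freezer cgroup is THAWED (i.e. the pod is not paused), then probe /healthz. A minimal sketch of the same checks done by hand, assuming the profile name, PID and apiserver address seen in this run (multinode-959371, 1068, 192.168.39.104); this is not minikube's implementation, just the equivalent commands:

	# Open a shell on the control-plane node, then repeat the probe's checks
	minikube -p multinode-959371 ssh
	sudo systemctl is-active --quiet kubelet && echo "kubelet active"
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # apiserver PID (1068 in this run)
	sudo cat /sys/fs/cgroup/freezer$(sudo grep -E '^[0-9]+:freezer:' /proc/1068/cgroup | cut -d: -f3)/freezer.state   # expect THAWED
	exit
	# Back on the host, probe the health endpoint (-k because the cluster CA is self-signed)
	curl -k https://192.168.39.104:8443/healthz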

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 node start m03 --alsologtostderr
E0817 21:35:59.350030  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-959371 node start m03 --alsologtostderr: (32.00868903s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.69s)
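StopNode and StartAfterStop exercise the per-node lifecycle. A short sketch of the commands involved, using the profile and node name from this log:

	minikube -p multinode-959371 node stop m03     # what StopNode runs
	minikube -p multinode-959371 status            # m03 now reports Host/Kubelet: Stopped
	minikube -p multinode-959371 node start m03    # what StartAfterStop runs (~32s in this run)
	kubectl get nodes                              # all nodes should report Ready again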

                                                
                                    
TestMultiNode/serial/DeleteNode (1.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-959371 node delete m03: (1.24926217s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.81s)
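The readiness check above extracts each node's Ready condition with a go-template. An equivalent jsonpath form, shown only as a possibly more readable alternative and not what the test itself runs:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'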

                                                
                                    
TestMultiNode/serial/RestartMultiNode (441.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959371 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0817 21:50:31.665529  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:52:07.552934  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 21:53:09.345250  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:55:31.665407  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 21:56:12.391624  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 21:57:07.553426  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959371 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m21.013454083s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959371 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (441.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-959371
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959371-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-959371-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (68.878087ms)

                                                
                                                
-- stdout --
	* [multinode-959371-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-959371-m02' is duplicated with machine name 'multinode-959371-m02' in profile 'multinode-959371'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959371-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959371-m03 --driver=kvm2  --container-runtime=crio: (47.866631962s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-959371
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-959371: exit status 80 (253.307288ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-959371
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-959371-m03 already exists in multinode-959371-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-959371-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.21s)
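Both non-zero exits above are the expected outcomes: machine names such as multinode-959371-m02 already belong to the multi-node profile, so they cannot be reused as standalone profile names, and node add refuses to run while a stray -m03 profile occupies the next node name. A sketch of the intended workflow (profile name from this log; the standalone profile name is only an example):

	minikube -p multinode-959371 node add          # grow the existing cluster; minikube picks the next mNN name
	minikube -p multinode-959371 node list
	minikube start -p some-unrelated-name --driver=kvm2 --container-runtime=crio   # new profiles need a non-colliding name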

                                                
                                    
TestScheduledStopUnix (116.53s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-584092 --memory=2048 --driver=kvm2  --container-runtime=crio
E0817 22:02:07.553635  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-584092 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.87884401s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584092 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-584092 -n scheduled-stop-584092
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584092 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584092 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584092 -n scheduled-stop-584092
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-584092
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584092 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0817 22:03:09.344854  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-584092
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-584092: exit status 7 (64.435289ms)

                                                
                                                
-- stdout --
	scheduled-stop-584092
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584092 -n scheduled-stop-584092
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584092 -n scheduled-stop-584092: exit status 7 (63.591346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-584092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-584092
--- PASS: TestScheduledStopUnix (116.53s)
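The scheduled-stop flow above can be reproduced directly with the flags the test uses (profile name taken from this run):

	minikube stop -p scheduled-stop-584092 --schedule 5m                    # arm a stop five minutes out
	minikube status -p scheduled-stop-584092 --format '{{.TimeToStop}}'     # inspect the countdown
	minikube stop -p scheduled-stop-584092 --cancel-scheduled               # disarm it
	minikube stop -p scheduled-stop-584092 --schedule 15s                   # re-arm; the host goes down once it fires
	minikube status -p scheduled-stop-584092 --format '{{.Host}}'           # exit status 7 while stopped (may be ok)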

                                                
                                    
TestKubernetesUpgrade (155.8s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.647849681s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-386309
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-386309: (2.352283146s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-386309 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-386309 status --format={{.Host}}: exit status 7 (79.683943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.432144479s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-386309 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (107.847897ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-386309] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-386309
	    minikube start -p kubernetes-upgrade-386309 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3863092 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-386309 --kubernetes-version=v1.28.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (23.859220789s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-386309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-386309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-386309: (1.248812642s)
--- PASS: TestKubernetesUpgrade (155.80s)
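The version dance above, upgrade allowed but in-place downgrade refused, boils down to the following commands (versions and profile name from this run):

	minikube start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
	minikube stop  -p kubernetes-upgrade-386309
	minikube start -p kubernetes-upgrade-386309 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --driver=kvm2 --container-runtime=crio
	# A straight downgrade exits with K8S_DOWNGRADE_UNSUPPORTED; recreate the cluster instead:
	minikube delete -p kubernetes-upgrade-386309
	minikube start  -p kubernetes-upgrade-386309 --kubernetes-version=v1.16.0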

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-313827 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-313827 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.721288ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-313827] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
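As the error above states, --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, clear it first; the commands below are taken from the message in this run:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-313827 --no-kubernetes --driver=kvm2 --container-runtime=crio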

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (103.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-313827 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-313827 --driver=kvm2  --container-runtime=crio: (1m42.747474823s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-313827 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.01s)

                                                
                                    
TestNetworkPlugins/group/false (3.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-975779 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-975779 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.040052ms)

                                                
                                                
-- stdout --
	* [false-975779] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 22:03:22.677222  233831 out.go:296] Setting OutFile to fd 1 ...
	I0817 22:03:22.677348  233831 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:03:22.677359  233831 out.go:309] Setting ErrFile to fd 2...
	I0817 22:03:22.677363  233831 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 22:03:22.677577  233831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-203458/.minikube/bin
	I0817 22:03:22.678247  233831 out.go:303] Setting JSON to false
	I0817 22:03:22.679133  233831 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24328,"bootTime":1692285475,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 22:03:22.679191  233831 start.go:138] virtualization: kvm guest
	I0817 22:03:22.681796  233831 out.go:177] * [false-975779] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 22:03:22.683539  233831 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 22:03:22.685202  233831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 22:03:22.683569  233831 notify.go:220] Checking for updates...
	I0817 22:03:22.686974  233831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-203458/kubeconfig
	I0817 22:03:22.688740  233831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-203458/.minikube
	I0817 22:03:22.690279  233831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 22:03:22.691980  233831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 22:03:22.693944  233831 config.go:182] Loaded profile config "NoKubernetes-313827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:03:22.694116  233831 config.go:182] Loaded profile config "force-systemd-env-330284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:03:22.694240  233831 config.go:182] Loaded profile config "offline-crio-263814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 22:03:22.694344  233831 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 22:03:22.731148  233831 out.go:177] * Using the kvm2 driver based on user configuration
	I0817 22:03:22.732450  233831 start.go:298] selected driver: kvm2
	I0817 22:03:22.732473  233831 start.go:902] validating driver "kvm2" against <nil>
	I0817 22:03:22.732498  233831 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 22:03:22.734858  233831 out.go:177] 
	W0817 22:03:22.736301  233831 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0817 22:03:22.737724  233831 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-975779 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-975779" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-975779

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975779"

                                                
                                                
----------------------- debugLogs end: false-975779 [took: 2.785955548s] --------------------------------
helpers_test.go:175: Cleaning up "false-975779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-975779
--- PASS: TestNetworkPlugins/group/false (3.04s)
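This "failure" is the expected outcome: the crio runtime refuses to start with CNI disabled. To run crio, keep CNI enabled, for example with an explicit plugin (bridge below is only an illustrative choice, not what this suite uses):

	minikube start -p false-975779 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio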

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (75.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-313827 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-313827 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m14.049352414s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-313827 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-313827 status -o json: exit status 2 (273.334876ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-313827","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-313827
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-313827: (1.034061938s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (75.36s)

                                                
                                    
TestNoKubernetes/serial/Start (28.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-313827 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-313827 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.30086763s)
--- PASS: TestNoKubernetes/serial/Start (28.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-313827 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-313827 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.726739ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
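A non-zero exit from systemctl is-active (typically status 3) means the unit is inactive, which is exactly what a --no-kubernetes profile should report. The same check by hand:

	minikube ssh -p NoKubernetes-313827 "sudo systemctl is-active kubelet"; echo "exit: $?"   # expect "inactive" and a non-zero exit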

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.76s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-313827
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-313827: (1.350044605s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (45.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-313827 --driver=kvm2  --container-runtime=crio
E0817 22:07:07.553173  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-313827 --driver=kvm2  --container-runtime=crio: (45.128477143s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-313827 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-313827 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.435442ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.35s)

                                                
                                    
TestPause/serial/Start (125.97s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-668546 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0817 22:08:09.345230  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-668546 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m5.974755274s)
--- PASS: TestPause/serial/Start (125.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (109.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m49.226155944s)
--- PASS: TestNetworkPlugins/group/auto/Start (109.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (85.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m25.122812268s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.12s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-668546 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0817 22:10:31.665561  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-668546 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (31.278256565s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.30s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-668546 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-668546 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-668546 --output=json --layout=cluster: exit status 2 (253.972104ms)

                                                
                                                
-- stdout --
	{"Name":"pause-668546","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-668546","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-668546 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.96s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-668546 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.96s)

                                                
                                    
TestPause/serial/DeletePaused (1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-668546 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (1.00s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.77s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.77s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (93.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m33.899818097s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.90s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5stkp" [518db838-0140-43a9-a6f4-ca0300d04521] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022472556s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-46xmf" [e19245eb-6983-466b-97c1-a1a2746e293d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-46xmf" [e19245eb-6983-466b-97c1-a1a2746e293d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.01491021s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-tjgxn" [954271e8-84a3-4718-b05d-232813c9deff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-tjgxn" [954271e8-84a3-4718-b05d-232813c9deff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.013956635s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (90.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.03784334s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.04s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (135.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m15.98225424s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (135.98s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xpgrw" [6e78695d-10e2-4cd1-b564-bf4096fcc873] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.03247071s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vzd9x" [daf82e32-9029-4f70-b7ae-a5518ccb53bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vzd9x" [daf82e32-9029-4f70-b7ae-a5518ccb53bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.01171843s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.44s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (103.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0817 22:12:52.392909  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m43.962684619s)
--- PASS: TestNetworkPlugins/group/flannel/Start (103.96s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-xvq9v" [9e20ed8b-9558-40be-9e63-830b0c3f75e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-xvq9v" [9e20ed8b-9558-40be-9e63-830b0c3f75e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.015676949s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-717933
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (126.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-975779 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m6.451800906s)
--- PASS: TestNetworkPlugins/group/bridge/Start (126.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (189.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-294781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-294781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (3m9.986666624s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (189.99s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wtxgr" [05dfab7a-5a94-48dc-b8bb-4d1cf4ce26ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wtxgr" [05dfab7a-5a94-48dc-b8bb-4d1cf4ce26ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.014436511s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (145s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-525875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-525875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (2m25.004702762s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (145.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v2vpp" [fda2b0e9-db98-470d-9cec-b197abb25dd4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.042325418s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wrgv7" [407d6178-be90-4c23-99df-6ab132e7eb90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wrgv7" [407d6178-be90-4c23-99df-6ab132e7eb90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.021765483s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (106.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-437183 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-437183 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4: (1m46.90280143s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (106.90s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-975779 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-975779 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-x4v6s" [3024dbd0-6a58-4e7b-a2ff-a403b06480b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-x4v6s" [3024dbd0-6a58-4e7b-a2ff-a403b06480b1] Running
E0817 22:15:31.664927  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.015932112s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.61s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-975779 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-975779 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-321287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4
E0817 22:15:52.865805  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:15:55.426110  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:15:55.683587  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:55.688947  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:55.699316  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:55.719687  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:55.760038  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:55.840496  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:56.000961  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:56.321369  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:56.961690  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:15:58.242807  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:16:00.547321  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:16:00.803804  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:16:05.924751  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:16:10.787515  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:16:16.165748  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:16:31.267940  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-321287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4: (1m42.574500521s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-294781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c06cf854-32b5-42a7-9ef0-0f19ae1aaf6f] Pending
helpers_test.go:344: "busybox" [c06cf854-32b5-42a7-9ef0-0f19ae1aaf6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0817 22:16:36.646886  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c06cf854-32b5-42a7-9ef0-0f19ae1aaf6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.041796291s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-294781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-525875 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [120471b2-fc06-44fc-b89c-bdaa40d7bb8d] Pending
helpers_test.go:344: "busybox" [120471b2-fc06-44fc-b89c-bdaa40d7bb8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [120471b2-fc06-44fc-b89c-bdaa40d7bb8d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.041571306s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-525875 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-294781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-294781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-525875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0817 22:16:50.601656  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-525875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066794612s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-525875 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-437183 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [94e40ca7-29e4-4ad8-9f77-4bf6a2900b09] Pending
helpers_test.go:344: "busybox" [94e40ca7-29e4-4ad8-9f77-4bf6a2900b09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [94e40ca7-29e4-4ad8-9f77-4bf6a2900b09] Running
E0817 22:17:07.553052  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.025716311s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-437183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-437183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-437183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187010173s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-437183 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 create -f testdata/busybox.yaml
E0817 22:17:34.527404  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ae48453-d674-4270-b843-806cccd2bb56] Pending
helpers_test.go:344: "busybox" [3ae48453-d674-4270-b843-806cccd2bb56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ae48453-d674-4270-b843-806cccd2bb56] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.022971517s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-321287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-321287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142604539s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-321287 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (795.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-294781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0817 22:19:18.032063  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-294781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m15.688174637s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-294781 -n old-k8s-version-294781
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (795.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (617.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-525875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0817 22:19:23.932390  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-525875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (10m17.571399596s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-525875 -n no-preload-525875
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (617.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (636.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-437183 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4
E0817 22:19:45.524952  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-437183 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4: (10m36.355749269s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-437183 -n embed-certs-437183
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (636.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (624.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-321287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4
E0817 22:20:20.384906  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:20.390184  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:20.400513  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:20.420844  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:20.461219  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:20.541666  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:20.702159  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:21.022902  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:21.663892  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:22.944672  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:25.505535  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:30.625851  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:31.665097  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 22:20:39.952578  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:20:40.866171  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:20:50.284791  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:20:55.683396  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:20:57.207730  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:21:01.346998  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:21:17.990406  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:21:23.369575  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:21:26.813492  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:21:42.308011  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:22:07.553832  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:22:14.045666  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:22:19.129331  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:22:41.730310  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:22:56.109428  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:23:04.228965  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:23:09.345431  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 22:23:23.793253  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:23:42.966780  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:24:10.654394  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:24:35.282707  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:25:02.970273  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
E0817 22:25:20.385534  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:25:31.664987  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/ingress-addon-legacy-449686/client.crt: no such file or directory
E0817 22:25:48.070105  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/bridge-975779/client.crt: no such file or directory
E0817 22:25:50.284827  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/kindnet-975779/client.crt: no such file or directory
E0817 22:25:55.683232  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/auto-975779/client.crt: no such file or directory
E0817 22:27:07.553297  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/functional-540012/client.crt: no such file or directory
E0817 22:27:14.045171  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/calico-975779/client.crt: no such file or directory
E0817 22:27:56.109548  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/custom-flannel-975779/client.crt: no such file or directory
E0817 22:28:09.344530  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 22:28:42.966118  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
E0817 22:29:32.393232  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/addons-696435/client.crt: no such file or directory
E0817 22:29:35.283096  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-321287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.4: (10m24.709962571s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321287 -n default-k8s-diff-port-321287
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (624.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-249978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0817 22:43:42.967141  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/enable-default-cni-975779/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-249978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (1m0.782227309s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.78s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-249978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-249978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.850313861s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.85s)
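
For context on the warning above: the test skips its pod-scheduling checks when a profile is started with --network-plugin=cni, on the assumption that a CNI plugin still has to be applied before workload pods can schedule. A minimal sketch of one way to finish that setup manually is shown below; the flannel manifest URL and the use of the profile name as the kubectl context are assumptions for illustration, not taken from this log.

	# assumption: any CNI manifest works here; flannel is shown purely as an example
	kubectl --context newest-cni-249978 apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
	# alternatively, let minikube pick and install a CNI at start time instead of --network-plugin=cni
	minikube start -p newest-cni-249978 --cni=flannel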

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-249978 --alsologtostderr -v=3
E0817 22:44:35.283440  210670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-203458/.minikube/profiles/flannel-975779/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-249978 --alsologtostderr -v=3: (11.119061328s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-249978 -n newest-cni-249978
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-249978 -n newest-cni-249978: exit status 7 (63.756319ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-249978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (52.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-249978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-249978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (51.652425525s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-249978 -n newest-cni-249978
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (52.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-249978 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-249978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-249978 -n newest-cni-249978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-249978 -n newest-cni-249978: exit status 2 (260.2396ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-249978 -n newest-cni-249978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-249978 -n newest-cni-249978: exit status 2 (262.181246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-249978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-249978 -n newest-cni-249978
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-249978 -n newest-cni-249978
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.67s)

                                                
                                    

Test skip (39/300)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.27.4/cached-images 0
13 TestDownloadOnly/v1.27.4/binaries 0
14 TestDownloadOnly/v1.27.4/kubectl 0
19 TestDownloadOnly/v1.28.0-rc.1/cached-images 0
20 TestDownloadOnly/v1.28.0-rc.1/binaries 0
21 TestDownloadOnly/v1.28.0-rc.1/kubectl 0
25 TestDownloadOnlyKic 0
36 TestAddons/parallel/Olm 0
46 TestDockerFlags 0
49 TestDockerEnvContainerd 0
51 TestHyperKitDriverInstallOrUpdate 0
52 TestHyperkitDriverSkipUpgrade 0
103 TestFunctional/parallel/DockerEnv 0
104 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
152 TestGvisorAddon 0
153 TestImageBuild 0
186 TestKicCustomNetwork 0
187 TestKicExistingNetwork 0
188 TestKicCustomSubnet 0
189 TestKicStaticIP 0
220 TestChangeNoneUser 0
223 TestScheduledStopWindows 0
225 TestSkaffold 0
227 TestInsufficientStorage 0
231 TestMissingContainerUpgrade 0
236 TestNetworkPlugins/group/kubenet 2.99
245 TestNetworkPlugins/group/cilium 3.23
260 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0-rc.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0-rc.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-975779 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-975779" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-975779

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975779"

                                                
                                                
----------------------- debugLogs end: kubenet-975779 [took: 2.855918466s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-975779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-975779
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-975779 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-975779" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-975779

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-975779" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975779"

                                                
                                                
----------------------- debugLogs end: cilium-975779 [took: 3.09071643s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-975779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-975779
--- SKIP: TestNetworkPlugins/group/cilium (3.23s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-340676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-340676
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    